

QuadratureSurfer

@QuadratureSurfer@lemmy.world

This profile is from a federated server and may be incomplete. Browse more on the original instance.

QuadratureSurfer ,

Just skip ahead to 29:45 and you finally get to the actual explanation of various invisible walls. Time-linked here: youtu.be/YsXCVsDFiXA&t=1785s

QuadratureSurfer ,

Depends on your work, what you’re trying to do, and how you use it.

As a developer I run my own local copy of Dolphin Mixtral 8x7B (an LLM) and it’s great for speeding up my productivity. I’m not asking it to do everything all at once; usually it’s just small snippets here and there to see if there’s a better or more efficient way.

I, for one, am looking forward to hardware improvements that can help us run larger models, so news like this is very welcome.

But you are correct: a large number of companies misunderstand how to use this technology. They should really be treating it like someone at an intern level.

It’s great for small, simple (especially repetitive) tasks, but you’ll still need to verify everything it produces.

QuadratureSurfer ,

Do the majority of users really want AI in their computers?

What this could mean is the ability to replace (or upgrade) something like Siri with a model that runs locally on your machine. This means it wouldn’t need to route your questions/requests through someone else’s computer (the cloud). You wouldn’t even need to connect the computer to the internet and you would still be able to work with that model.

Besides, there are many companies that don’t want you to pass their internal documents on to companies like OpenAI (ChatGPT). With locally run models there’s no problem with this, as that data is never uploaded anywhere.

QuadratureSurfer ,

Well that’s a loaded question.

There are probably some websites that let you try out the model while they run it on their own equipment (or have it rented out through Amazon, etc.). But the biggest advantage of these models is being able to run them locally if you have the hardware to handle it (a beefy GPU for quicker responses and a lot of RAM).

To quickly answer your question, you can download the model from here:
huggingface.co/…/dolphin-2.5-mixtral-8x7b-GGUF
I would recommend Q5_K_M.

But you’ll also need some software to run it.

A large number of users are using “Text-Generation-WebUI” github.com/oobabooga/text-generation-webui
There’s also “LM Studio” lmstudio.ai
Ollama github.com/ollama/ollama
And more.

I know that LM Studio supports both NVIDIA and AMD GPUs.
Text-Generation-WebUI can support AMD GPUs as well; it just requires some additional setup to get it working.
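
If you go the Ollama route, here’s a minimal sketch of talking to it from Python over its local REST API. This is an illustration under a couple of assumptions: Ollama is already running on the default port, and you’ve pulled a Mixtral-based model (the model name below is my assumption, so adjust it to whatever `ollama list` shows).

```python
# Minimal sketch: query a locally running Ollama server over its REST API.
# Assumes Ollama is installed and serving on the default port (11434) and that
# a model has already been pulled, e.g. `ollama pull dolphin-mixtral`.
import json
import urllib.request

payload = {
    "model": "dolphin-mixtral",   # assumed model name; adjust to what you pulled
    "prompt": "Suggest a more efficient version of a bubble sort in Python.",
    "stream": False,              # return one JSON object instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```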

Some things to keep in mind…
Hardware requirements:

  • RAM is the biggest limiting factor for which model you can run, while your GPU/CPU decides how quickly the LLM can respond.
  • If you can fit the entire model inside your GPU’s VRAM, you’ll get the most speed. In this case I would suggest using a GPTQ model instead of GGUF: huggingface.co/…/dolphin-2.5-mixtral-8x7b-GPTQ
  • Even the newest consumer-grade GPUs only have 24GB of VRAM right now (RTX 4090, RTX 3090, and RX 7900 XTX). And the next generation of consumer GPUs looks like it will be capped at 24GB of VRAM as well, unless AMD decides this is their way of competing with NVIDIA.
    GGUF models let you compensate for VRAM limitations by loading as much of the model as possible into VRAM, with anything left over going into system RAM.

Context Length: Think of an LLM as something that only has a fixed amount of short-term memory. The bigger you set the context length, the more short-term memory you give it (the maximum you can set depends on the model you’re using, and setting it to the max also requires more RAM). Mixtral 8x7B models have a max context length of 32k tokens.
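
As a rough illustration of the VRAM/RAM split and the context length setting, here’s a minimal sketch using the llama-cpp-python bindings with a GGUF file. The file path and layer count are assumptions on my part; tune `n_gpu_layers` to however many layers actually fit in your VRAM.

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python, offloading as many
# layers as fit into VRAM and leaving the rest in system RAM.
# Assumes `pip install llama-cpp-python` (built with GPU support) and that the
# path below points at your downloaded Q5_K_M quant (assumed filename).
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf",  # assumed local path
    n_gpu_layers=20,   # layers offloaded to VRAM; raise/lower to fit your GPU
    n_ctx=32768,       # context length; Mixtral 8x7B supports up to 32k tokens
)

out = llm("Q: What does GGUF offloading do? A:", max_tokens=128)
print(out["choices"][0]["text"])
```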

QuadratureSurfer ,

Well if you read OpenAI’s terms of service, there’s an indemnification clause in there.

Basically, if you get ChatGPT to say something defamatory/libellous and then post it, you would foot the legal bill for any lawsuits that may arise from your screenshot of what their LLM produced.

QuadratureSurfer ,

It’s been a long time since I’ve seen that… but I mostly watch YouTube through Grayjay, or on a browser with adblocking enabled.

QuadratureSurfer ,

You don’t use any adblockers on YouTube?

QuadratureSurfer ,

Ah, sorry, I didn’t realize that there was an ad-blocker that didn’t block the premium prompt.

QuadratureSurfer ,

I’m gonna start spreading this to old subreddits and forums I used to frequent where games have been shut down.

The BBC Won't Use AI to Promote Doctor Who Again After Being Yelled at by Fans (gizmodo.com)

The backlash was immediate, but it didn’t stop the BBC from using text generated by LLMs—and purportedly checked and copy-edited by a human before approval—in two marketing emails and mobile push notifications to advertise Doctor Who. But now, the corporation will stop the experimentation entirely after a wave of official...

QuadratureSurfer ,

The best way to handle LLMs is to treat them like an intern. They’re useful and can get a lot of work done, but you need to double check their work.

QuadratureSurfer ,

Direct link to the video: x.com/ModdedQuad/status/1771298116719002100

Mario Kart section happens a little after 1/3 of the way through.

QuadratureSurfer ,

Not sure who’s downvoting you, but for anyone else wondering, it’s called “Douyin”.

First sentence of the Wikipedia article on TikTok: en.m.wikipedia.org/wiki/TikTok

QuadratureSurfer ,

Looks like the Nordic countries have some of the best protections for their press.

rsf.org/en/index

The U.S. is ranked around 45th, which is disappointing considering the First Amendment is supposed to guarantee freedom of the press.

But in general, Western countries fare far better than places like Russia, China, India, the Middle East, etc.

Apple to allow iOS app downloads direct from websites in the EU (with restrictions), in compliance with the Digital Markets Act (www.pcmag.com)

Developers interested in distributing iOS apps on their websites also have to cross a high bar. This includes being registered or incorporated in the EU, being a member of “good standing in the Apple Developer Program for two continuous years or more,” and having an app that received “more than one million first annual...

QuadratureSurfer , (edited )

Did they really just pull a Unity move with charging per download??

This is not going to be good for any developers that sit in that danger zone of offering a free app with in-app purchases. If they don’t make enough money (over €500k) once they hit that 1 million download threshold… they could owe more money than they make.

Edit: Looks like the first million downloads each year are free, but anything after that gets charged per download. Still bad for free apps if they grow a lot without getting much income from their users.
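
For a rough sense of the math (this assumes the widely reported €0.50 Core Technology Fee per first annual install beyond the first million, which is my assumption here rather than something stated above), a free app that blows past the threshold racks up fees fast:

```python
# Rough, illustrative math only; assumes a €0.50 Core Technology Fee per
# "first annual install" beyond the first 1 million (check Apple's current terms).
installs = 3_000_000          # hypothetical yearly installs of a free app
fee_per_install = 0.50        # EUR, assumed CTF rate
billable = max(0, installs - 1_000_000)
annual_fee = billable * fee_per_install
print(f"Annual fee: EUR {annual_fee:,.0f}")  # EUR 1,000,000 for 3M installs
```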

QuadratureSurfer ,

I have rarely used it.

I used it for some Microsoft product that you had to buy to be able to view the thumbnails of iPhone pictures natively in Windows Explorer.

I also used it for setting up WSL with Ubuntu or some other Linux Distro.

QuadratureSurfer ,

Liquid metal, self healing… wait a minute, I’ve seen where this leads! www.youtube.com/watch?v=5ivS5Cw3eyk&t=117s

QuadratureSurfer ,

Same. Started off with MySpace, but everyone else went to Facebook.

QuadratureSurfer ,

It’s worth noting that one reason grilles can get so large is to provide better cooling for larger engines.

With a larger grille you get more air flowing through the radiator, which lets the engine run more efficiently.

Electric vehicles don’t need the same kind of cooling that ICE engines do, so an electric truck/SUV would allow for different front-end designs, which could reduce injuries to pedestrians if they were struck.

Older Computer Programmers & Engineers

Lately, I was going through the blog of a math professor I took at a community college back when I was in high school. Having gone the path I did in life, I took a look at what his credentials were, and found that he completed a computer science degree back sometime in the 1970s. He had a curmudgeonly and standoffish...

QuadratureSurfer ,

Computer Engineering is still a degree that combines Computer Science courses with Electrical Engineering courses.

You typically want to go this route if you want to be the kind of person who can design the logic for next-generation GPUs/CPUs, or if you like working where hardware meets programming.

QuadratureSurfer ,

While this is terrible, how is this tech news?

QuadratureSurfer ,

Direct link to the GitHub repo:
github.com/nickbild/local_llm_assistant?tab=readm…

It’s a small model by comparison. If you want something that’s offline and actually comparable to ChatGPT 3.5, you’ll want the Mixtral 8x7B model instead (running on a beefy machine):

mistral.ai/news/mixtral-of-experts/

QuadratureSurfer ,

I’ve got it running with a 3090 and 32GB of RAM.

There are some models that let you split the load between system RAM and VRAM (it will just be slower than running it exclusively in VRAM).

QuadratureSurfer ,

No analogy is ever going to be perfect when you try to look into the details too much… that’s why it’s an analogy.

This tiny, tamper-proof ID tag can authenticate almost anything (news.mit.edu)

TL;DR MIT researchers have developed an antitampering ID tag that is tiny, cheap, and secure. It is several times smaller and significantly cheaper than the traditional radio frequency tags that are used to verify product authenticity. The tags use glue containing microscopic metal particles. This glue forms unique patterns that...

QuadratureSurfer ,

To clarify what OP meant by his ‘AI’ statement:

The system uses AI to compare glue patterns […]

The researchers noticed that if someone attempted to remove a tag from a product, it would slightly alter the glue containing the metal particles, making the original signature slightly different. To counter this, they trained a model:

The researchers produced a light-powered antitampering tag that is about 4 square millimeters in size. They also demonstrated a machine-learning model that helps detect tampering by identifying similar glue pattern fingerprints with more than 99 percent accuracy.

It’s a good use case for an ML model.
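
Purely as an illustration of the idea (none of this is from the paper), matching a scanned glue-pattern fingerprint against the one recorded at manufacture could be as simple as a similarity score with a threshold:

```python
# Toy sketch only: compare a stored glue-pattern "fingerprint" vector against a
# freshly scanned one using cosine similarity. The feature vectors and threshold
# are made up for illustration; the actual MIT model and features aren't public
# in this article.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

enrolled = [0.12, 0.87, 0.33, 0.51, 0.09]   # fingerprint captured at manufacture
scanned  = [0.11, 0.85, 0.36, 0.50, 0.10]   # fingerprint captured in the field

THRESHOLD = 0.99  # arbitrary cutoff for this toy example
score = cosine_similarity(enrolled, scanned)
print(f"similarity={score:.4f}", "OK" if score >= THRESHOLD else "possible tampering")
```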

In my opinion, this should only be used for identifying the product itself.
The danger I can see with this product is management deciding that they can rely on it to detect tampering without considering other factors.

The use case provided in the article was for something like a car wash sticker placed on a customer’s car.

If the customer tried to peel it off and reattach it to a different car, the business could detect that as tampering.

However, in my opinion, there are a number of other reasons where this model could falsely accuse someone of tampering:

  • Temperature swings. A hot day could warp the glue/sticker slightly, which would cause the anti-tampering check to flag it the next time it’s scanned.
  • Having to get the windshield replaced because of damage/cracks. The customer would transfer the sticker and unknowingly void it.
  • Kids, just don’t underestimate them.

In the end, most management won’t really understand this device beyond statements like, “You can detect tampering with more than 99 percent accuracy!” And, unless they inform the customers of how the anti-tampering works, customers won’t understand why they’re being accused of tampering with the sticker.

QuadratureSurfer ,

You’d have to read the article to know what they’re getting at.

The use case provided was for businesses like a car wash that puts a sticker on a car windshield. The ML model would be able to detect if the customer attempted to transfer the sticker from one car to another.

A pretrained ML model to detect this is actually a very good use case.

However, I think the implementation of this as an “anti-tampering detector” is a dangerous route to tread, since there are other factors that need to be considered.

More 128TB SSDs are coming as almost no one noticed this launch — another SSD controller that can support up to 128TB appeared paving the way for HDD-beating capacities (www.techradar.com)

Phison quietly revealed an updated X2 SSD platform at CES.

Are there any genuine benefits to AI?

I can see some minor benefits - I use it for the odd bit of mundane writing and some of the image creation stuff is interesting, and I knew that a lot of people use it for coding etc - but mostly it seems to be about making more cash for corporations and stuffing the internet with bots and fake content. Am I missing something...

QuadratureSurfer ,

AI is a very broad topic. Unless you only want to talk about Large Language Models (like ChatGPT) or AI image generators (Midjourney), there are a lot of uses for AI that you don’t seem to be considering.

It’s great for upscaling old videos (this falls under image-generating AI, since it can be used for colorizing, improving details, and adding in additional frames), so that you end up with something like: www.youtube.com/watch?v=hZ1OgQL9_Cw

It’s useful for scanning an image for text and being able to copy it out (OCR).

It’s excellent if you’re deaf, or sitting in a lobby with a muted live broadcast and want to see what is being said with closed captions (Speech to Text).

Flying your own drone with object detection/avoidance.

There’s a lot more, but basically, it’s great at taking mundane tasks where you’re stuck doing the same (or similar) thing over, and over, and over again, and automating them.
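
For instance, the OCR case above is only a few lines with off-the-shelf tools. A minimal sketch, assuming the Tesseract engine plus the pytesseract and Pillow packages are installed (the image filename is just a placeholder):

```python
# Minimal sketch: pull copyable text out of an image with Tesseract OCR.
# Assumes the tesseract binary is installed and on PATH, plus
# `pip install pytesseract pillow`. "receipt.png" is a placeholder filename.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("receipt.png"))
print(text)
```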

QuadratureSurfer ,

“AI” is the broadest umbrella term for any of these tools. That’s why I pointed out that OP really should be a bit more specific as to what they mean with their question.

AI doesn’t have the same meaning that it had over 10 years ago when we used to use it exclusively for machines that could think for themselves.

Toyota cars collecting and potentially sharing location data and personal information, Choice says, and it's not the only car brand facing privacy concerns (www.abc.net.au)

Rafi Alam from CHOICE told The World Today: “When we looked at Toyota’s privacy policy, we found that these Connected Services features will collect data such as fuel levels, odometer readings, vehicle location and driving data, as well as personal information like phone numbers and email addresses.”...

QuadratureSurfer ,

Not just phone numbers and email addresses, but a recent ruling by a federal judge allows them to record and collect text messages without worry:

theverge.com/…/automakers-collect-record-text-mes…

QuadratureSurfer ,

Then they won’t get your messages or any other information specific to your device.

But cars don’t need that connection to phone home with all of the data that the car itself is collecting. Cars today all have some sort of cheap connection so that they can pass on your data one way or another.

QuadratureSurfer ,

Yes? But as the person you are responding to has mentioned, they’re not after the individuals; they’re after the “ISPs who did nothing in response to piracy complaints.”

Having the IP address of those users will reveal which ISP they are using.

Just run a traceroute or tracert command against any website and you can see for yourself how your connection initially goes through your ISP before branching out to the rest of the internet.
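
If you want to see that programmatically, here’s a minimal Python sketch that just shells out to the system traceroute/tracert (the hostname is a placeholder, and it assumes the command is available on your PATH):

```python
# Minimal sketch: run the system traceroute/tracert and print the first few hops,
# which typically belong to your ISP. Assumes the command exists on your PATH.
import platform
import subprocess

host = "example.com"  # placeholder; any website works
cmd = (["tracert", "-h", "5", host] if platform.system() == "Windows"
       else ["traceroute", "-m", "5", host])

result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
print(result.stdout)  # the early hops usually reveal your ISP's routers
```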

QuadratureSurfer ,

Well, AI spaghetti videos certainly have come a long way: v.redd.it/hz89h0ikv7gc1

QuadratureSurfer ,

Someone please correct me if I’m wrong, but isn’t the problem that uranium has a half-life of a couple hundred million years, while the half-life of beryllium is less than a second?

Only Beryllium-10 has a long half-life for beta decay. Adding another neutron drops that back down to a few seconds and additional neutrons drop it back to a fraction of a second. So as long as that specific type of Beryllium isn’t used, it would be fine, right?

Edit: www.thoughtco.com/beryllium-isotopes-603868

OpenAI's GPT Trademark Request Has Been Denied (tsdr.uspto.gov)

First, applicant argues that the mark is not merely descriptive because consumers will not immediately understand what the underlying wording “generative pre-trained transformer” means. The trademark examining attorney is not convinced. The previously and presently attached Internet evidence demonstrates the extensive and...

QuadratureSurfer ,

You can run it locally with an RTX 3090 or less (as long as you have enough RAM), but there’s a bit of a tradeoff in speed when using more system RAM vs VRAM.

QuadratureSurfer , (edited )

They do have a free tier, and while it doesn’t automatically request your data removal, it can at least notify you which data brokers have your info so you can make the requests manually yourself. monitor.mozilla.org

Edit: The data removal features are currently available only in the US according to their FAQ:

Why is data removal only available in the US? When will it be available in my country?

Data removal is only available in the US because of legislation that allows data brokers to operate there. In many other countries and in regions like the EU, laws like GDPR prevent these websites from collecting and selling people’s personal information without their consent. We’re exploring ways to expand protection and personal data removal outside of the US where needed.

support.mozilla.org/en-US/kb/mozilla-monitor-faq

QuadratureSurfer , (edited )

Mozilla Monitor used to be just for monitoring breaches, but they have recently added the ability to monitor your own personal information that data brokers have on you.

Edit: According to their FAQ it looks like this has geographic restraints, I’ll update my original comment.

QuadratureSurfer ,

GrayJay has been great for Android, I haven’t had any issues watching YouTube.

Using the built-in Ad blocking on Brave browser (both Android and Desktop) I haven’t had any slowness or issues with YouTube at all.

QuadratureSurfer , (edited )

It’s a pain to switch between accounts. It eats up a ton of CPU if I use it through my browser (unless I use it in Firefox). But if I use it in Firefox, I can’t get video/voice calls or join meetings.

On a mobile device (iOS): It randomly logs me out (more like it will timeout if I haven’t opened the app recently). Notifications aren’t reliable. If I join a meeting with some other group as a “guest”, I can go back to view my active chat, but then I can only hear audio from the meeting and can’t get back to see what’s happening in the meeting unless I leave the room and come back.

There’s more, but this is just off the top of my head.

The floppy disk refused to die in Japan - laws that forced the continued use of floppies have finally hit the chopping block (www.tomshardware.com)

Floppy disks can finally make their way to the land of eternal slumber. Japan’s Ministry of Economy, Trade and Industry has abolished any requirement for applicants to use this ancient magnetic media...

QuadratureSurfer ,

In the future, are we going to run into this same issue with USB-C in Europe?

QuadratureSurfer ,

MS Teams. Works for chat, but not for receiving audio/video calls/meetings.

QuadratureSurfer ,

Unfortunately I think that ship has sailed.

The meaning of AI has changed drastically within the past 10 years or so.

Back then ‘AI’ was a term reserved for Artificially Intelligent beings like Skynet, HAL, the machines from The Matrix, etc.

Today AI has been watered down to the point that we need to specify what kind of AI we’re referring to.

I’m not sure there’s a way to stop that unless you unleash a swarm of very convincing social media accounts across the internet all run by LLMs with the goal of correcting our current course… that or put them to work writing news articles like this one.

QuadratureSurfer ,

It doesn’t break the law at all. The courts have already ruled that copyrighted material can be fed into AI/ML models for training:

towardsdatascience.com/the-most-important-supreme…
