Multiple accounts have been created with the sole purpose of posting unsolicited advertising in posts or replies.

Accounts which solely or persistently post advertisements may be terminated.


Coldgoron , to technology in NVIDIA’s new AI chatbot runs locally on your PC

I recommend jan.ai over this; last I heard, it was a decent option.

FaceDeer ,
@FaceDeer@kbin.social avatar

There's also GPT4All, as far as I'm aware.

Hawk ,

Or ollama.ai

PlexSheep ,
@PlexSheep@feddit.de avatar

I use huggingface.co/chat; you can also easily host open-source models on your local machine.

Poggervania , to technology in NVIDIA’s new AI chatbot runs locally on your PC
@Poggervania@kbin.social avatar

Can I sing the NVIDIA song with it?

femboy_bird ,

I had almost forgotten that existed

Thanks

General_Effort , to technology in NVIDIA’s new AI chatbot runs locally on your PC

That was an annoying read. It doesn’t say what this actually is.

It’s not a new LLM. Chat with RTX is specifically software for doing inference (i.e., running LLMs) at home, using the hardware acceleration of RTX cards. There are several projects that do this, though they might not be quite as optimized for NVIDIA’s hardware.

Go directly to NVIDIA to avoid the clickbait.

Chat with RTX uses retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software and NVIDIA RTX acceleration to bring generative AI capabilities to local, GeForce-powered Windows PCs. Users can quickly, easily connect local files on a PC as a dataset to an open-source large language model like Mistral or Llama 2, enabling queries for quick, contextually relevant answers.

Source: blogs.nvidia.com/…/chat-with-rtx-available-now/

Download page: www.nvidia.com/…/chat-with-rtx-generative-ai/
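For the curious, the RAG flow NVIDIA describes boils down to: embed your local files, retrieve the chunks most similar to the query, and prepend them to the prompt. Here's a minimal, self-contained sketch of that idea (the toy hash embedding stands in for a real encoder model, and the final prompt would be handed to a local LLM like Mistral or Llama 2; none of this is NVIDIA's actual implementation):

```python
# Minimal RAG sketch: retrieve relevant local text, then prepend it to the prompt.
import hashlib

import numpy as np

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words hash embedding; real systems use a neural encoder."""
    vec = np.zeros(256)
    for word in text.lower().split():
        vec[int(hashlib.md5(word.encode()).hexdigest(), 16) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

documents = [
    "Chat with RTX indexes local files on your PC.",
    "TensorRT-LLM accelerates LLM inference on RTX GPUs.",
    "Mistral and Llama 2 are open large language models.",
]
doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = doc_vectors @ embed(query)  # cosine similarity (vectors are unit-norm)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "What does TensorRT-LLM do?"
prompt = "Context:\n" + "\n".join(retrieve(query)) + f"\n\nQuestion: {query}"
print(prompt)  # a local LLM would now generate an answer from this prompt
```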

GenderNeutralBro ,

Pretty much every LLM you can download already has CUDA support via PyTorch.

However, some of the easier-to-use frontends don’t use GPU acceleration because it’s a bit of a pain to configure across a wide range of hardware models and driver versions. IIRC GPT4All does not use GPU acceleration yet (might be outdated; I haven’t checked in a while).

If this makes local LLMs more accessible to people who are not familiar with setting up a CUDA development environment or Python venvs, that’s great news.
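As a quick sanity check, assuming you have PyTorch installed, you can verify whether the underlying stack sees your GPU at all:

```python
# Check whether PyTorch can use CUDA acceleration (requires torch installed).
import torch

if torch.cuda.is_available():
    print(f"CUDA available: {torch.cuda.get_device_name(0)}")
else:
    print("No CUDA device found; inference will fall back to CPU.")
```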

General_Effort ,

I’d hope that this uses the hardware better than PyTorch. Otherwise, why the specific hardware demands? Well, it can always be marketing.

There are several alternatives that offer one-click installers, e.g. in this thread (a rough API sketch follows the list):

AGPL-3.0 license: jan.ai

MIT license: ollama.com

MIT license: gpt4all.io/index.html

(There’s more.)
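As an example of how little glue these need once installed: ollama serves a local HTTP API on port 11434 by default, so something like the following works, assuming you've already run `ollama pull llama2` and the server is up (a sketch, not the only way to call it):

```python
# Query a locally running ollama server (default port 11434).
# Assumes `ollama pull llama2` has been run and the server is running.
import json
import urllib.request

payload = json.dumps({
    "model": "llama2",
    "prompt": "Why is the sky blue?",
    "stream": False,  # return one JSON object instead of a token stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```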

Oha ,

GPT4All somehow uses GPU acceleration on my RX 6600 XT.

GenderNeutralBro ,

Ooh, nice. Looking at the changelogs, it looks like they added Vulkan acceleration back in September. Probably not as good as CUDA/Metal on supported hardware, though.
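If you want to force the issue, the GPT4All Python bindings expose a device parameter that selects the GPU (Vulkan) backend. A rough sketch; the model filename is just an example and has to be downloaded first:

```python
# Ask GPT4All to use GPU (Vulkan) acceleration via its Python bindings.
# pip install gpt4all; the model filename below is illustrative.
from gpt4all import GPT4All

model = GPT4All("mistral-7b-openorca.Q4_0.gguf", device="gpu")
print(model.generate("Name three uses for a local LLM.", max_tokens=100))
```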

Oha ,

Getting around 44 iterations/s (or whatever that means) on my GPU.

CeeBee ,

Ollama with Ollama WebUI is the best combo, in my experience.

Aatube , to technology in Waymo issued a recall after two robotaxis crashed into the same pickup truck
@Aatube@kbin.social avatar

Why is an update called a recall?

Chozo ,

The fleet of cars is summoned back to HQ to have the update installed, which causes a temporary service shutdown until cars are able to start leaving the garage with the new software. They can't do major updates over the air due to the file size; pushing out a multi-gigabyte update to a few hundred cars at once isn't great on the cellular network.

Jakeroxs ,

Actually, there have been several Tesla “recalls” that were simply OTA updates.

MNByChoice ,

They often are. Many recalls for other manufacturers are similar. They don’t actually buy back the cars and crush them.

Kbobabob ,

What typically happens when a recall is issued for other vehicles? Don’t they either remove and replace the bad part or add extra parts to fix the issue?

How is removing bad code and replacing it with good code or just adding extra code to fix the issue any different?

Do you want to physically go somewhere?

filcuk ,

Kinda, as the word implies. If it’s a software update, call it that; the car’s not going back to the shop/manufacturer.

Kbobabob ,

It sounds like location is important for some reason.

Jakeroxs , (edited )

Here’s an example of why I don’t like that they’re called recalls when it’s just a system update: if you have a recall on a food item, is there some way to fix it aside from taking it back (to be replaced) or throwing it away?

When there’s a security patch released on your phone, do we call it a recall on the phone? Or is that reserved for when there’s a major hardware defect (like the Samsung Note fiasco)?

Kbobabob ,

I think the difference in the case you mentioned is that with a car they use recall because it could be dangerous to keep using it as is.

Jakeroxs ,

Fair, it just seems like there should maybe be a new word for this era where an OTA update is all that’s needed.

ShepherdPie ,

What if you consider it’s the software/firmware getting recalled and not the vehicle itself? Then it’s all perfectly cromulent.

twack ,

Because Tesla was fixing significant safety issues without reporting them to the NHTSA in a way that would let it track the problems and the source of the issue. The two of them got into a pissing match, and the result is that now all OTAs are recalls. After this, the media realized that “recall” generates more views than “OTA”, and here we are.

Dlayknee ,

I think it’s slightly more nuanced - not all OTAs are recalls, and not all recalls are OTAs (for Tesla). Depending on the issue (for Teslas), the solution may be pushed via an OTA in which case they “issue a recall” with a software update. They’re actually going through this right now. For some other issues though, it’s a hardware problem that an OTA won’t fix so they issue a recall to repair the problem (ex: when the wiring harness for their cameras was fraying the cables).

This is 100% from the NHTSA shenanigans, though.

Shake747 , to technology in Meta takes down Chinese Facebook accounts posing as US military families

This is how I know there are upvote bots rolling through here.

Who tf on Lemmy commends Facebook…for anything?

betterdeadthanreddit ,

Shit company is capable of doing the right thing once in a while. Now they can go right back to being evil.

SuckMyWang ,

In this instance, doing the right thing got them more money. The right or wrong part is irrelevant to them.

cloudless ,
@cloudless@feddit.uk avatar

Evil corp vs evil regime. It is fun to watch.

ilickfrogs ,
@ilickfrogs@lemmy.world avatar

lmao chill. A broken clock is still right twice a day.

blahsay ,

Chinese trolls and bots are not your imagination.

Facebook is wildly unpopular on here and you’re still being downvoted

Shake747 ,

Either they’re not Chinese or they don’t look further than 1 comment deep

BertramDitore , to technology in NVIDIA’s new AI chatbot runs locally on your PC
@BertramDitore@lemmy.world avatar

They say it works without an internet connection, and if that’s true this could be pretty awesome. I’m always skeptical about interacting with chatbots that run in the cloud, but if I can put this behind a firewall so I know there’s no telemetry, I’m on board.

halfwaythere ,

You can already do this. There are plenty of vids that show you how and it’s pretty easy to get started. Expanding functionality to get it to act and respond how you want is a bit more challenging. But definitely doable.

furzegulo , to technology in NVIDIA’s new AI chatbot runs locally on your PC

i have no need to talk to my gpu, i have a shrink for that

whodatdair ,

Idk I kinda like the idea of a madman living in my graphics card. I want to be able to spin them up and have them tell me lies that sound plausible and hallucinate things.

femboy_bird ,

Gpu is cheaper (somehow)

gaifux ,

Your shrink renders video frames?

RobotToaster , to technology in NVIDIA’s new AI chatbot runs locally on your PC
@RobotToaster@mander.xyz avatar

Shame they leave GTX owners out in the cold again.

simple ,

deleted_by_author

jvrava9 ,
@jvrava9@lemmy.dbzer0.com avatar

Source?

dojan ,
@dojan@lemmy.world avatar

There were CUDA cores before RTX. I can run LLMs on my CPU just fine.

Steve ,

There are a number of local AI LLMs that run on any modern CPU. No GPU needed at all, let alone RTX.
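For example, llama.cpp's Python bindings will happily run a quantized GGUF model entirely on the CPU. A minimal sketch (the model path is a placeholder for whatever model you've downloaded):

```python
# CPU-only inference with llama.cpp's Python bindings (pip install llama-cpp-python).
# The model path is a placeholder for any downloaded GGUF file.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_gpu_layers=0)
out = llm("Q: What runs LLMs without a GPU? A:", max_tokens=64)
print(out["choices"][0]["text"])
```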

halfwaythere , (edited )

This statement is so wrong. I have Ollama with the llama2 model running decently on a 970 card. Is it super fast? No. Is it usable? Yes, absolutely.

Kyrgizion ,

2xxx too. It’s only available for 3xxx and up.

CeeBee ,

Just use Ollama with Ollama WebUI

anlumo ,

The whole point of the project was to use the Tensor cores. There are a ton of other implementations for regular GPU acceleration.

cestvrai , to technology in Waymo issued a recall after two robotaxis crashed into the same pickup truck

Hmm, so it’s only designed to handle expected scenarios?

That’s not how driving works… at all. 😐

wahming ,

Face it, that’s actually better than many drivers can do

samus12345 , to technology in Waymo issued a recall after two robotaxis crashed into the same pickup truck
@samus12345@lemmy.world avatar

They thought the truck was being driven by Sarah Connor.

psycho_driver , to technology in Waymo issued a recall after two robotaxis crashed into the same pickup truck

aaaaand fuck this truck in particular.

JCreazy , to technology in Waymo issued a recall after two robotaxis crashed into the same pickup truck

I’m getting tired of implementing technology before it’s finished and all the bugs are worked out. Driverless cars are still not ready for prime time. The same thing is currently happening with AI: companies are utilizing it without having any idea what it can do.

nivenkos ,

That’s how you get technological advancement.

Bureaucracy just leads to monopolies and little to no progress.

LesserAbe ,

You’re right, there should be a minimum safety threshold before tech is deployed. Waymo has had pretty extensive testing (unlike, say, Tesla). As I understand it, their safety record is pretty good.

How many accidents have you had in your life? I’ve been responsible for a couple of rear-endings, and I collided with a guard rail (no one ever injured). Ideally we want incidents per mile driven to be lower for these driverless cars than when people drive. Waymos have driven a lot of miles (and millions more in a virtual environment), and supposedly their number is better than human driving, but the question is whether they’ve driven enough, in enough varied situations, to really make that an accurate stat.

Doof ,

I slightly tapped a car my first day driving, that’s it. No damage. Not exactly a good question.

Look at how data is collected with self-driving vehicles and tell me it’s truly safer.

LesserAbe ,

My point in asking about personal car incidents is that each of those, like your car tap, shows we can make mistakes, and they didn’t merit a news story. There is a level of error we accept right now, and it comes from humans instead of computers.

It’s appropriate that there are stories about Waymo, because it’s new and needs to be scrutinized and proven. Still, it would benefit us to read these stories with a critical mind, not to reflexively think “one accident, that means they’re totally unsafe!” At the same time, we shouldn’t accept at face value information from companies who have a vested interest in portraying the technology as safe.

Doof ,

I obviously do, since I said to look at how the data is collected, what is counted and what is not. Take your own advice and look into that. It’s not this one accident that makes me think it’s unsafe, and certainly not ready to be out there driving.

LesserAbe ,

Here’s an article saying that, based on data so far, Waymo is safer than human drivers. If you have other information on the subject, I’d be interested to read it.

Doof ,

www.youtube.com/watch?v=pmGOjHi-7MM&t=129s This is a good and entertaining video on it, but if you prefer to read, here is the source: docs.google.com/document/d/…/edit

Also, from your own article: “But it’s going to be another couple of years—if not longer—before we can be confident about whether Waymo vehicles are helping to reduce the risk of fatal crashes.”

corsicanguppy ,

tired of implementing technology before it’s finished

That’s every single programme you’ve ever used.

Software will be built, sold, used, maintained and finally obsoleted, and it will still not be ‘complete’. It will have bugs, sometimes lots, sometimes huge, and those will not be fixed. Our biggest accomplishment as a society may be the case where we patched software on Mars or in the Voyager probe still speeding away from Earth.

Self-driving cars don’t need to have perfectly ‘complete’ software, though; they just need to work better than humans. That’s already been accomplished, long ago.

And with each fix applied to every one of them, it’s a situation none of them should ever repeat. Can we say the same about humans? I can’t even get my beautiful, stubborn wife to slow down, leave more space, and quit turning the steering wheel in that rope-climbing way a farmer on a tractor does (because the airbag will take her hand off).

dsemy ,

That’s every single programme you’ve ever used.

No software is perfect, but anybody who uses a computer knows that some software is much less complete. This currently seems to be the case when it comes to autonomous driving tech.

And with each fix applied to every one of them, it’s a situation none of them should ever repeat.

First, there are many companies developing autonomous driving tech, and if there’s one thing tech companies like to do, it’s re-invent the wheel (ffs, Tesla did this literally). Second, have you ever used modern software? A bug fix guarantees nothing. Third, you completely ignore the opposite possibility: what if they push a serious bug in an update, which drives you off a cliff and kills you? It doesn’t matter if they push a fix 2 hours later (and let’s be honest, many of these cars will likely stop getting updates pretty fast anyway once this tech gets really popular; just look at the state of software updates in other industries).

daed ,

I understand your issue with these cars - they’re dangerous, and could kill people with incomplete or buggy software. I believe the person you are responding to was pointing out that even with the bugs, these are already safer than human drivers. That holds up when looking at data rather than headlines and going off of how things seem.

Personally, I would prefer to be in control of the vehicle at all times. I don’t like the idea of driverless tech either.

redfox ,

Well, has anyone done good statistics to show whether self-driving cars are more dangerous than regular distracted humans as a whole?

We can always point to numerous self-driving car errors and accidents, but I am under the impression that, compared to the number of accidents involving people on a daily basis, self-driving cars might be safer even now.

I’m thinking of how many crashes took place in the time it took me to type this out. I’m also curious about the fatality rate between self- or assisted driving vs not.

I think we tend to be super critical of new things, especially tech things, which is understandable and appropriate, but it would be nice to see some holistic context. I wish government regulators would publish that data for us, to help us form informed opinions, instead of having to rely on manufacturers (conflict of interest) or journalists who need a good story to tell, and some clicks.

dsemy ,

Currently there are many edge cases which haven’t even been considered yet, so maybe statistically it is safer, but it doesn’t change anything if your car makes a dumb mistake you wouldn’t have made and gets you into an accident (or someone else’s car does, and they don’t stop it because they weren’t watching the road).

nooeh ,

How will they encounter these edge cases without real-world testing?

JCreazy ,

Fair point

drivepiler ,

I agree, but testing with a supervisory driver should be required in case of emergency situations. It’s both safer and creates job opportunities.

long_chicken_boat ,

I’m against driverless cars, but I don’t think this type of error can be detected in a lab environment. It’s just impossible to test with every single car model or every real-world situation that it will find in actual usage.

An optimal solution would be to have a backup driver with every car who keeps an eye on the road in case of software failure. But, of course, this isn’t profitable, so they’d rather put lives at risk.

tonyn , to technology in Waymo issued a recall after two robotaxis crashed into the same pickup truck

That pickup truck was asking for it, I tell ya. He was looking at me sideways, he was.

postmateDumbass ,

It said RAM on the side!

waterSticksToMyBalls ,

Brb gonna dazzle paint my car

Chozo , (edited ) to technology in Waymo issued a recall after two robotaxis crashed into the same pickup truck

After an investigation, Waymo found that its software had incorrectly predicted the future movements of the pickup truck due to “persistent orientation mismatch” between the towed vehicle and the one towing it.

Having worked at Waymo for a year troubleshooting daily builds of the software, this sounds to me like they may be trying to test riskier, "human" behaviors. Normally, the cars won't accelerate at all if the lidar detects an object in front of it, no matter what it thinks the object is or what direction it's moving in. So the fact that this failsafe was overridden somehow makes me think they're trying to add more "What would a human driver do in this situation?" options to the car's decision-making process. I'm guessing somebody added something along the lines of "assume the object will have started moving by the time you're closer to that position" and forgot to set a backup safety mechanism for the event that the object doesn't start moving.

I'm pretty sure the dev team also has safety checklists that they go through before pushing out any build, to make sure that every failsafe is accounted for, so that's a pretty major fuckup to have slipped through the cracks (if my theory is even close to accurate). But luckily, a very easily-fixed fuckup. They're lucky this situation was just "comically stupid" instead of "harrowing tragedy".
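To make that hypothetical concrete, here's a purely illustrative sketch of the failure mode I'm guessing at; this is my speculation, not actual Waymo code:

```python
# Purely illustrative guess at the failure mode described above; NOT Waymo code.
# The risky "human" heuristic trusts a motion prediction, and the original
# "never accelerate toward a detected object" failsafe gets bypassed.

def should_proceed(object_ahead: bool, predicted_to_move: bool) -> bool:
    if not object_ahead:
        return True
    # Hypothetical new behavior: "assume it will have moved by the time we arrive"
    return predicted_to_move  # BUG: nothing re-checks that the object actually moved

def should_proceed_fixed(object_ahead: bool, predicted_to_move: bool,
                         observed_moving: bool) -> bool:
    if not object_ahead:
        return True
    # Backup safety mechanism: only trust the prediction once motion is confirmed.
    return predicted_to_move and observed_moving
```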

GiveMemes ,

Get your beta tests off my tax dollar funded roads pls. Feel free to beta test on a closed track.

Chozo ,

They've already been testing on private tracks for years. There comes a point where, eventually, something new is used for the first time on a public road. Regardless, even despite idiotic crashes like this one, they're still safer than human drivers.

I say my tax dollar funded DMV should put forth a significantly more stringent driving test and auto-revoke the licenses of anybody who doesn't pass, before I'd want SDCs off the roads. Inattentive drivers are one of the most lethal things in the world, and we all just kinda shrug our shoulders and ignore that problem, but then we somehow take issue when a literal supercomputer on wheels with an audited safety history far exceeding any human driver has two hiccups over the course of hundreds of millions of driven miles. It's just a weird outlook, imo.

fiercekitten ,

People have been hit and killed by autonomous vehicles on public streets due to bad practices and bad software. Those cases aren’t hiccups, those are deaths that shouldn’t have happened and shouldn’t have been able to happen. If a company can’t develop its product and make it safe without killing people first, then it shouldn’t get to make the product.

Chozo ,

People have been hit and killed by human drivers at much, much higher rates than by SDCs. Those aren't hiccups, and those are deaths that shouldn't have happened, as well. The miles-driven-per-collision ratios of humans and SDCs aren't even comparable. Human drivers are an order of magnitude more dangerous, and there's an order of magnitude more human drivers than SDCs in the cities where these fleets are deployed.

By your logic, you should agree that we should be revoking licenses and removing human drivers from the equation, because people are far more dangerous than SDCs are. If we can't drive safely without killing people, then we shouldn't be licensing people to drive, right?

fiercekitten ,

I’m all for making the roads safer, but these companies should never have the right to test their products in a way that gets people killed, period. That didn’t happen in this article, but it has happened, and that’s not okay.

Chozo ,

People shouldn't drive in a way that gets people killed. Where's the outrage for the problem that we've already had for over a century and done nothing to fix?

A solution is appearing, and you're rejecting it.

ShepherdPie ,

Who’s been killed by autonomous vehicles?

DoomBot5 ,

Full releases have plenty of bugs.
