The fleet of cars is summoned back to the HQ to have the update installed, so it causes a temporary service shutdown until cars are able to start leaving the garage with the new software. They can't do major updates over the air due to the file size; pushing out a multi-gigabyte update to a few hundred cars at once isn't great on the cellular network.
What typically happens when a recall is issued for other vehicles? Don’t they either remove and replace the bad part or add extra parts to fix the issue?
How is removing bad code and replacing it with good code or just adding extra code to fix the issue any different?
Here’s an example of why I don’t like that they’re called recalls when it’s just a system update: if you have a recall on a food item, is there some way to fix it aside from taking it back (to be replaced) or throwing it away?
When there’s a security patch released on your phone, do we call it a recall on the phone? Or is that reserved for when there’s a major hardware defect (like the Samsung Note fiasco)?
Because Tesla was fixing significant safety issues without reporting them to the NHTSA in a way that would let the agency track the problems and the source of the issue. Tesla and the NHTSA got into a pissing match, and the result is that now all OTAs are recalls. After this, the media realized that “recall” generates more views than “OTA”, and here we are.
I think it’s slightly more nuanced - not all OTAs are recalls, and not all recalls are OTAs (for Tesla). Depending on the issue (for Teslas), the solution may be pushed via an OTA in which case they “issue a recall” with a software update. They’re actually going through this right now. For some other issues though, it’s a hardware problem that an OTA won’t fix so they issue a recall to repair the problem (ex: when the wiring harness for their cameras was fraying the cables).
That was an annoying read. It doesn’t say what this actually is.
It’s not a new LLM. Chat with RTX is specifically software to do inference (=use LLMs) at home, while using the hardware acceleration of RTX cards. There are several projects that do this, though they might not be quite as optimized for NVIDIA’s hardware.
Go directly to NVIDIA to avoid the clickbait.
Chat with RTX uses retrieval-augmented generation (RAG), NVIDIA TensorRT-LLM software and NVIDIA RTX acceleration to bring generative AI capabilities to local, GeForce-powered Windows PCs. Users can quickly, easily connect local files on a PC as a dataset to an open-source large language model like Mistral or Llama 2, enabling queries for quick, contextually relevant answers.
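The RAG part is the interesting bit: instead of retraining the model on your files, it retrieves the most relevant local passage and stuffs it into the prompt before inference. A toy sketch of that flow, using simple bag-of-words similarity instead of the embedding models a real setup like Chat with RTX would use (all names here are illustrative, not NVIDIA's API):

```python
# Toy retrieval-augmented generation: find the local passage most
# similar to the query, then prepend it as context to the prompt
# that would be sent to the LLM. Real systems use vector embeddings.
import math
from collections import Counter

def similarity(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words token counts."""
    wa, wb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(wa[t] * wb[t] for t in wa)
    na = math.sqrt(sum(v * v for v in wa.values()))
    nb = math.sqrt(sum(v * v for v in wb.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Pick the local document most similar to the query."""
    return max(docs, key=lambda d: similarity(query, d))

def build_prompt(query: str, docs: list[str]) -> str:
    """Augment the prompt with retrieved context before inference."""
    context = retrieve(query, docs)
    return f"Context: {context}\nQuestion: {query}\nAnswer:"

docs = [
    "The quarterly report shows revenue grew 12 percent.",
    "The cat sat on the warm windowsill all afternoon.",
]
print(build_prompt("how much did revenue grow", docs))
```

The “dataset” never touches the model weights; retrieval just decides which of your files gets pasted into the context window, which is why it can run locally against arbitrary documents.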
Pretty much every LLM you can download already has CUDA support via PyTorch.
However, some of the easier to use frontends don’t use GPU acceleration because it’s a bit of a pain to configure across a wide range of hardware models and driver versions. IIRC GPT4All does not use GPU acceleration yet (might be outdated; I haven’t checked in a while).
If this makes local LLMs more accessible to people who are not familiar with setting up a CUDA development environment or Python venvs, that’s great news.
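Part of why that setup is painful: even the basic “can I use the GPU?” check has to survive missing libraries and broken driver installs. A minimal sketch of the kind of fallback logic frontends end up writing, assuming PyTorch as the (optional) backend; the helper name is made up:

```python
# Hypothetical device-selection helper: prefer CUDA when PyTorch is
# installed and reports a working GPU, otherwise fall back to CPU.
def pick_device() -> str:
    try:
        import torch  # optional dependency; may not be installed
        if torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"

print(pick_device())
```

Multiply that by ROCm, Metal, Vulkan, and mismatched driver versions, and it's easy to see why some frontends just ship CPU-only inference.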
Ooh nice. Looking at the change logs, looks like they added Vulkan acceleration back in September. Probably not as good as CUDA/Metal on supported hardware though.
In a blog post, Waymo has revealed that on December 11, 2023, one of its robotaxis collided with a backwards-facing pickup truck being towed ahead of it. The company says the truck was being towed improperly and was angled across a center turn lane and a traffic lane.
See? Waymo robotaxis don’t just take you where you need to go, they also dispense swift road justice.
I still don’t understand how these are allowed. One is not allowed to let a Tesla drive without being 100% in control and ready to take the wheel at all times, but these cars are allowed to drive around autonomously?
If I am driving my car and I hit a pedestrian, they have legal recourse against me. What happens when it’s an AI, or a company, or a car?
The company is at fault. I don’t think there are laws currently in place that say a vehicle has to be manned on the street, just that it uses the correct signals and responds correctly to traffic, but I may be wrong. It may also depend on local laws.
You have legal recourse against the owner of the car, presumably the company that is profiting from the taxi service.
You see these all the time in San Francisco. I’d imagine the vast majority of the time, there are no issues. It’s just going to be big headlines whenever some accident does happen.
Nobody seems to care about the nearly 50,000 people dying every year from human-caused car accidents
I would actually wager that’s not true. It’s just that the people we elect tend to favor the corporations and look after their interests more so than the people who elected them, so we end up being powerless to do anything about it.
sure, but why do these accidents caused by AI drivers get on the news consistently and yet we rarely see news about human-caused accidents? it’s because news reports what is most interesting - not exactly accurate or representative of the real problems of the country
“Made contact,” “towed improperly.” What a pathetic excuse. Wasn’t the entire point of self-driving cars the ability to deal with unpredictable situations? The ones that happen all the time, every day?
Considering that driving habits differ from town to town, the current approaches do not seem viable for the long term anyway.
It’s a rare edge case that slipped through because the circumstances that cause it are obscure. From the description it was a minor bump, and the software was updated to try to ensure it doesn’t happen again - and it probably won’t.

Testing for things like this is difficult, but looking at the numbers from these projects, testing is going incredibly well and we’re likely to see moves toward legal acceptance soon.