
QuadratureSurfer

@[email protected]

This profile is from a federated server and may be incomplete. Browse more on the original instance.

QuadratureSurfer , to technology in A new NES emulator was briefly available on the Apple App Store

I mean, the most that Nintendo would do is send a cease and desist…

I doubt they would go straight to filing court documents. A cease and desist is meant to save them time and costs, and even at that point nothing has officially been filed in court.

But I understand not even wanting to get on the radar of a big corporation like that.

QuadratureSurfer , to technology in How to Temporarily Bypass Discord ToS Update

Also, when it comes to arbitration, we should be able to choose an unbiased arbitrator rather than one who is being paid by the company we have a dispute with.

There’s a lawyer that goes into detail on this from time to time: www.youtube.com/watch?v=K0iXFnGMD48&t=702s

QuadratureSurfer , to technology in How to Temporarily Bypass Discord ToS Update

I think you missed the part about forced arbitration (if you’re a US Resident).

And the most important part:

You can decline this agreement to arbitrate by emailing an opt-out notice to [email protected] within 30 days of April 15, 2024 or when you first register your Discord account, whichever is later; otherwise, you shall be bound to arbitrate disputes in accordance with the terms of these paragraphs.

QuadratureSurfer , to android in YouTube is finally cracking down on third-party apps like ReVanced

You’re not going to find everything you want outside of YouTube just yet. But apps like Grayjay make it a lot easier to start transitioning to other platforms.

It combines most of the major video sharing platforms in one app. From there you can start following creators who have set up channels on other platforms, while still keeping your main home feed coming from YouTube if you want to.

QuadratureSurfer , to videos in SM64's Invisible Walls Explained Once and for All

Just skip ahead to 29:45 and you finally get to the actual explanation of various invisible walls. Time-linked here: youtu.be/YsXCVsDFiXA&t=1785s

QuadratureSurfer , to technology in Apple is reportedly planning a big AI-focused M4 Mac upgrade

Well that’s a loaded question.

There are probably some websites that let you try out the model while they run it on their own equipment (or rent it out through Amazon, etc.). But the biggest advantage of these models is being able to run them locally if you have the hardware to handle it (a beefy GPU for quicker responses and a lot of RAM).

To quickly answer your question, you can download the model from here:
huggingface.co/…/dolphin-2.5-mixtral-8x7b-GGUF
I would recommend Q5_K_M.
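
If you’d rather grab the file from a script than from the browser, here’s a minimal sketch using the huggingface_hub library. The repo id and file name below are placeholders (the link above is truncated), so substitute the actual repository and the Q5_K_M file listed on its files page:

```python
# Minimal sketch: downloading a single GGUF file with the huggingface_hub library.
# The repo id and file name are placeholders; substitute the real ones from the link above.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="SomeUser/dolphin-2.5-mixtral-8x7b-GGUF",  # placeholder repo id
    filename="dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf",   # placeholder file name (Q5_K_M quant)
)
print(local_path)  # local path to the downloaded model file
```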

But you’ll also need some software to run it.

A large number of users are using one of these:

  • “Text-Generation-WebUI” github.com/oobabooga/text-generation-webui
  • “LM Studio” lmstudio.ai
  • Ollama github.com/ollama/ollama
  • And more.

I know that LM Studio supports both NVIDIA and AMD GPUs.
Text-Generation-WebUI can support AMD GPUs as well; it just requires some additional setup to get it working.

Some things to keep in mind…
Hardware requirements:

  • RAM is the biggest limiting factor for which model you can run, while your GPU/CPU decides how quickly the LLM can respond.
  • If you can fit the entire model inside your GPU’s VRAM, you’ll get the most speed. In that case I would suggest using a GPTQ model instead of GGUF: huggingface.co/…/dolphin-2.5-mixtral-8x7b-GPTQ
  • Even the newest consumer-grade GPUs only have 24GB of VRAM right now (RTX 4090, RTX 3090, and RX 7900 XTX). And the next generation of consumer GPUs looks like it will be capped at 24GB of VRAM as well, unless AMD decides this is their way of competing with NVIDIA.
  • GGUF models let you compensate for VRAM limitations: as much of the model as possible is loaded into VRAM, and anything left over spills into system RAM.

Context Length: Think of an LLM as something that only has a fixed amount of short-term memory. The bigger you set the context length, the more short-term memory you give it (the maximum length you can set depends on the model you’re using, and setting it to the max also requires more RAM). Mixtral 8x7B models have a max context length of 32k.
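
To make the GPU offload and context length settings concrete, here’s a minimal sketch using llama-cpp-python, a Python binding for the same llama.cpp engine most of these tools build on. It assumes the Q5_K_M GGUF file has already been downloaded; the file name and layer count are just illustrative, so tune them to your hardware:

```python
# Minimal sketch, assuming `pip install llama-cpp-python` (built with GPU support)
# and a Q5_K_M GGUF file already on disk. File name and layer count are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="dolphin-2.5-mixtral-8x7b.Q5_K_M.gguf",  # local GGUF file (example name)
    n_gpu_layers=20,  # layers offloaded to VRAM; whatever doesn't fit stays in system RAM
    n_ctx=32768,      # context length ("short-term memory"); Mixtral 8x7B supports up to 32k
)

result = llm(
    "Explain in one sentence why context length matters.",
    max_tokens=128,
)
print(result["choices"][0]["text"])
```

Setting n_gpu_layers to -1 offloads every layer, which only makes sense if the whole model fits in VRAM; raising n_ctx also raises memory use, as noted above.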

QuadratureSurfer , to technology in Apple is reportedly planning a big AI-focused M4 Mac upgrade

Do the majority of users really want AI in their computers?

What this could mean is the ability to replace (or upgrade) something like Siri with a model that runs locally on your machine. That means it wouldn’t need to route your questions/requests through someone else’s computer (the cloud). You wouldn’t even need to connect the computer to the internet, and you would still be able to work with that model.

Besides, many companies don’t want you passing their internal documents to companies like OpenAI (ChatGPT). With locally run models that isn’t a problem, because the data is never uploaded anywhere.
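
As a rough sketch of what “nothing leaves your machine” looks like in practice, here’s a call against a locally running Ollama server through its Python client. The model tag is just an example of something you’d have pulled beforehand:

```python
# Minimal sketch: querying a locally hosted model through the Ollama Python client.
# Assumes `pip install ollama`, a local Ollama server, and an already-pulled model
# (the model tag below is just an example).
import ollama

response = ollama.chat(
    model="dolphin-mixtral",  # example tag for a locally pulled model
    messages=[{"role": "user", "content": "Summarize this internal memo: ..."}],
)
# The request only goes to the local server (http://localhost:11434 by default),
# so the prompt and the reply never touch a third-party cloud service.
print(response["message"]["content"])
```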

QuadratureSurfer , to technology in Apple is reportedly planning a big AI-focused M4 Mac upgrade

Depends on your work, what you’re trying to do, and how you use it.

As a developer I run my own local copy of Dolphin Mixtral 8x7B (an LLM) and it’s great for speeding up my work. I’m not asking it to do everything all at once, usually just small snippets here and there to see if there’s a better or more efficient way to do something.

I, for one, am looking forward to hardware improvements that can help us run larger models, so news like this is very welcome.

But you are correct: a large number of companies misunderstand how to use this technology, when they should really be treating it like someone at an intern level.

It’s great for giving it small and simple (especially repetitive) tasks, but you’ll still need to verify everything.

QuadratureSurfer , to technology in A Breakthrough Online Privacy Proposal Hits Congress

…senate.gov/…/committee-chairs-cantwell-mcmorris-…

Scroll all the way to the bottom and there’s a link to a PDF…

Or direct link here: …senate.gov/…/3F5EEA76-5B18-4B40-ABD9-F2F681AA965…

And an easier to read summary of each section here: …senate.gov/…/E7D2864C-64C3-49D3-BC1E-6AB41DE863F…

QuadratureSurfer , to technology in Elon Musk's X pushed a fake headline about Iran attacking Israel. X's AI chatbot Grok made it up.

Well if you read OpenAI’s terms of service, there’s an indemnification clause in there.

Basically, if you get ChatGPT to say something defamatory/libellous and then post it, you would foot the legal bill for any lawsuits that arise from your screenshot of what their LLM produced.

QuadratureSurfer , to technology in Google might make users pay for AI features in search results

Ah, sorry, I didn’t realize that there was an ad-blocker that didn’t block the premium prompt.

QuadratureSurfer , to technology in Google might make users pay for AI features in search results

You don’t use any adblockers on YouTube?

QuadratureSurfer , to technology in Google might make users pay for AI features in search results

It’s been a long time since I’ve seen that… but I mostly watch YouTube through Grayjay, or on a browser with adblocking enabled.

QuadratureSurfer , to technology in The largest campaign ever to stop publishers destroying games

I’m gonna start spreading this to old subreddits and forums I used to frequent where games have been shut down.

QuadratureSurfer , to technology in The BBC Won't Use AI to Promote Doctor Who Again After Being Yelled at by Fans

The best way to handle LLMs is to treat them like an intern: they’re useful and can get a lot of work done, but you need to double-check their work.
