
People share misinformation because of social media’s incentives — but those can be changed (www.niemanlab.org)

People share misinformation because of social media’s incentives — but those can be changed::“After a few tweaks to the reward structure of social media platforms, users begin to share information that is accurate and fact-based.” (Though the tweaks involved paying people to do so.)

j4k3 ,
@j4k3@lemmy.world avatar

Engagement and the bottom line

Maintaining high levels of user engagement is crucial for the financial model of social media platforms. Attention-getting content keeps users active on the platforms. This activity provides social media companies with valuable user data for their primary revenue source: targeted advertising.

Or maybe it doesn’t. Maybe exploiting the public commons is simply subhuman shittery. Targeted advertising is stalking.

One thing is for sure, featuring and amplifying shitty people makes shitty places. The new Photon version of Lemmy does an unexpectedly fantastic job at managing sludge.

Is there a way to run old bare metal hardware on LAN for a dedicated computing task like AI?

This is an abstract curiosity. Let’s say I want to use an old laptop to run an LLM. I assume I would still need PyTorch, Transformers, etc. What is the absolute minimum system configuration required to avoid overhead such as schedulers, kernel threads, virtual memory, etc.? Are there options to expose the bare metal and use a...

j4k3 OP ,
@j4k3@lemmy.world avatar

Seems like avoiding context switching and all the overhead associated would make a big difference when pretty much everything in cache is critical data.

I’m more curious about running something like Bark TTS where the delay is not relevant, but it would be cool to have the instructional clarity of the Halo Master Chief voice read me technical documentation, or test the effects of training my own voice, tuned to how I hear it, reading me stuff I find challenging. If the software is only able to process around 10 seconds at a time, just script it and let it run. The old machine will just collect dust otherwise.

Anyways, what’s the best scheduler with affinity/isolation/pinning?
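For the affinity/isolation/pinning part of the question, Linux exposes CPU affinity directly, no special scheduler required. A minimal sketch (Linux-only; the CPU ids here are placeholders, not a recommendation):

```python
import os

def pin_to_cpus(pid: int, cpus: set) -> set:
    """Restrict `pid` (0 = the calling process) to the given CPU set.

    Once pinned, the kernel scheduler will not migrate the process off
    these cores, which avoids the cache-trashing context switches the
    comment above is worried about.
    """
    os.sched_setaffinity(pid, cpus)
    return os.sched_getaffinity(pid)

# Pin this process to CPU 0 only.
print(pin_to_cpus(0, {0}))
```

For stronger isolation you would pair this with the `isolcpus=` or `nohz_full=` kernel command-line options so nothing else gets scheduled on those cores; that part is a boot-time setting, not something a userspace script can do.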

j4k3 ,
@j4k3@lemmy.world avatar

Whitelist firewall. Because this is the real reason everyone has a right to ad block: ads are hidden links to other websites. It’s like walking through a gauntlet of pickpockets who bribed the credit card company just to make it to the checkout at your local grocery store, or some asshole you invite into your home who goes to the bathroom, opens a window, and lets a dozen random people into your home if they pay a dollar for the access. The entire system is based on stalking people. It is criminal.

j4k3 ,
@j4k3@lemmy.world avatar
  • Anton Petrov
  • GeologyHub
  • PBS Space Time
  • Sabine Hossenfelder
  • Fraser Cain
  • Mentour Pilot
  • toldinstone
  • Kings and Generals
  • Applied Science
  • Breaking Taps
  • This Old Tony
  • Huygens Optics
  • Two Minute Papers

Anyone know of a reverse command script or package to parse args for flags, expand them, and condense a man or help page(s) to just the relevant flags?

I have been working on my scripts for user/group permissions today. This idea has been on my back burner for a while. I’m sure others have done this before. I just haven’t encountered them yet....
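A rough sketch of the idea, assuming the simplest case of a `--help` page whose option entries each begin with a dash. Real man pages would need a proper troff parser, so treat `condense_help` as a hypothetical starting point, not a finished tool:

```python
import re

def condense_help(help_text: str, flags: set) -> str:
    """Keep only the option entries for the requested flags.

    An entry starts at a line whose first token is a dash option;
    continuation lines are kept until the next entry begins.
    """
    out, keep = [], False
    for line in help_text.splitlines():
        if re.match(r"\s*-", line):
            # Collect every -x / --long token on the entry's first line.
            tokens = set(re.findall(r"--?[\w][\w-]*", line))
            keep = bool(tokens & flags)
        if keep:
            out.append(line)
    return "\n".join(out)
```

Given a help text with entries for `-l`, `-a`, and `-h`, calling `condense_help(text, {"-l"})` keeps only the `-l` entry and its continuation lines.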

j4k3 ,
@j4k3@lemmy.world avatar

Coming from 2 years of Silverblue and now trying Workstation for the last month: get super comfortable with making tools with distrobox before going to Silverblue. Once you try SB, don’t waste any time with the native toolbox system. Layer on distrobox and just use it from the get-go. You can’t upgrade the distros used to make toolbox containers, so don’t waste time building anything you want to keep or maintain using toolbox. Distrobox is orders of magnitude more capable, and even more orders of magnitude better for documentation and features.

Using toolbox will leave you frustrated and looking for ways to use podman commands, and that leads to infuriating documentation written only for advanced Docker users making the transition to Podman. I ended up layering a lot of stuff onto my base build. For example, toolbox does not have access to /dev by default, so messing with Arduino/embedded stuff is a pain, and there is no documentation or flag option available in SB for how to deal with this issue.

Overall SB worked okay for me, but I probably learned less and progressed slower than I would have if I had just used Workstation. That said, I am probably going to wipe my current setup and start over soon. My Workstation build is already an untenable mess.

j4k3 ,
@j4k3@lemmy.world avatar

I can only hear Geoffrey Rush’s Captain Barbossa… “Whores and scoundrels…”

How do you containerize stuff you install from source in a way that you can completely remove later?

I’m doing a bunch of AI stuff that needs compiling to try various unrelated apps. I’m making a mess of config files and extras. I’ve been using distrobox and conda. How could I do this better? Chroot? Different user logins for extra home directories? Groups? Most of the packages need access to CUDA and localhost. I would...

j4k3 OP ,
@j4k3@lemmy.world avatar

I have read up on it some, but Fedora does UEFI, secure boot, and a self-compiling Nvidia driver that gets rebuilt for each kernel update so well that I hesitate to leave. I tried installing the Nix package manager on Fedora, but a user-owned directory mounted in root is the ugliest thing I’ve ever seen, and I immediately removed it.

j4k3 OP ,
@j4k3@lemmy.world avatar

By default it is just the packages and dependencies that are removed. Your /home/user/ directory is still mounted just the same, which puts all of your config and dot files in all the normal places. If you install another distro like Arch on a Fedora base, it also installs all of the extra root package locations for Arch, and these get left on the host system after removing the distrobox instance. So yeah, it still makes a big mess.

j4k3 OP ,
@j4k3@lemmy.world avatar

I need to explore this BTRFS feature; I just don’t have a good place or reason to go down that path yet. I’ve been on Silverblue for years, but decided to try Workstation for now. Someone in the past told me I should have been using BTRFS for FreeCAD saves, but I never got around to trying it.

j4k3 OP ,
@j4k3@lemmy.world avatar

Unfortunately, the UEFI on my laptop doesn’t allow custom keys. I can disable secure boot. I can make and place custom keys, but it never switches over from the initial unprotected state to whatever they call that transition state after the new custom PK key is added. Once all the custom keys are configured and I try to reinstate secure boot in the BIOS, it flushes the custom keys and recreates a new set automatically using the secret Trusted Platform Module key(s) built into the hardware.

Since my initial failure trying to add custom keys, I’ve come across Sakaki’s old UEFI guide on Gentoo and noted the possibility of maybe installing keys with the EFI KeyTool to boot into EFI directly, but I have not tried it.

Do you mean the /nix directory?

Yeah. I wanted to try a flake I came across for an AI app I was having trouble compiling on my own. The flake was set up for the Windows Subsystem for Linux, though. I tried to install the Nix package manager as a single user because I don’t want some extra daemon running, or anything with as elaborate an uninstall as the multi-user Nix lists. At least, it is too much to deal with for the short-term goal of installing a single app. After installing the single-user Nix package manager, the flake I was trying to use wasn’t listed in the Nix repo, and reconfiguring what was already set up looked like a waste of time.

In general, I would rather have my entire root directory locked down. I don’t really know the real-world implications of having a user-owned directory in my root file system. It just struck me as too strange to overlook, and it is far too deep a rabbit hole for a goal that had already proved fruitless. I searched for several of the tools I’ve had to compile on my own, and none of them were listed in Nix. There are a couple on the AUR, but no distro seems to do FOSS AI yet.

I’ve already been burned by running Arch natively years ago. I dropped it and installed Gentoo, which I ran for a few months before switching to Silverblue because I didn’t really have the scripting skills to make Gentoo work for me at the time. I’m very wary of any elitist rhetoric about any distro now. When I see stuff like ‘Nix is a language you just learn/only for power users,’ I have flashbacks of dozens of tabs open in the Arch wiki, back when I learned what a fractal link chasm is, and all those times I had to actually use backups to function with Arch; the only times I’ve ever needed to restore backups in my life.

At this point, I think I’m on the edge of transitioning from an intermediate to a “power user” after ten years on Linux exclusively, but I know there is a lot of stuff I do not grasp yet. An operating system shouldn’t be a project I need to actively manage, maintain, or stop what I am working on for a random tangential deep dive just to use. I can’t tell what Nix is like in practice. The oddity of the package manager does not inspire confidence, but I’m admittedly skeptical by default. I see how dependencies are handled better in Nix in some ways, but I do not have infinite storage for a bunch of redundant copies of everything just for every obscure package on my base system. I’m not clear on how Nix does configs and dot files “better” than my present situation. Without a deep dive into UEFI security, and details about Nix’s ability to coexist with a separate Windows drive (the laptop has a few configuration elements only available in Windows), I haven’t tried NixOS.

j4k3 OP ,
@j4k3@lemmy.world avatar

Is the Nix learning curve like Arch’s f.u.-user-go-cry-to-rsync/CS-masters expectations, or like Gentoo’s tl;dr approach of “our packagers know how to create sane defaults and include the key info you need for sound decisions”? I never want to deal with another distro that randomly dumps me into an enormous subject to read because they made a change in a dependency that requires me to manually intervene in a system update, or any OS that makes basic FOSS tools like GIMP and FreeCAD tedious.

j4k3 OP ,
@j4k3@lemmy.world avatar

Those Flatpak configs are not quite as scattered; most are in .config, .var, or .local. Most Flatpaks leave junk behind in these directories. I just deleted a few today. A lot of the problems start happening when you need to compile stuff where each package has the same dependency but a different version of it in each one. Then you have a problem, need to track down some related library that is not in the execution path, and suddenly there are 10 copies of a dozen files all related to the stupid thing scattered all over your system. It becomes nearly impossible to track down which file is related to the container with the problem.

This is only an issue if you find yourself playing with software that is not yet supported directly by any packagers for Linux distros; stuff like FOSS AI right now.
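The “which of the 10 copies belongs to which container” problem can at least be triaged by hashing. A sketch (the roots and filename are whatever you happen to be hunting; nothing here is distrobox-specific):

```python
import hashlib
import os
from collections import defaultdict

def duplicate_report(roots, filename):
    """Walk the given roots and group every copy of `filename` by a
    short content hash: byte-identical copies land in the same bucket,
    while genuinely different builds of the library stand apart."""
    by_hash = defaultdict(list)
    for root in roots:
        for dirpath, _dirs, files in os.walk(root):
            if filename in files:
                path = os.path.join(dirpath, filename)
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()[:12]
                by_hash[digest].append(path)
    return dict(by_hash)
```

Pointing it at, say, the container storage directory and /usr/lib64 for a library name would show which copies are redundant duplicates and which are different versions.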

j4k3 OP ,
@j4k3@lemmy.world avatar

Thanks for the read. This is what I was thinking about trying but hadn’t quite fleshed out yet. It is right on the edge of where I’m at in my learning curve. Perfect timing, thanks.

Do you have any advice when the packages are mostly python based instead of makefiles?

j4k3 OP ,
@j4k3@lemmy.world avatar

Python, in these instances, is being used as the installer script. As far as I can tell it involves all of the same packaging and directory issues as what make is doing. Like, most of the packages have a Python startup script that takes a text file and installs everything from it. This usually includes a pip git+address or two. So far, just getting my feet wet to try out AI has been enough for me to overlook what all is happening behind the curtain. The machine is behind an external whitelist firewall all by itself. I am just starting to get to the point where I want to dial everything in so I know exactly what is happening.

I’ve noticed at a few oddball times during installations that pip said something like “package unavailable; reverting to base system.” This was while running inside conda, which itself is inside a distrobox container. I’m not sure what “base system” it might be referring to here, or if this is normal. I am probing for any potential gotchas revolving around Python and containers. I imagine it is still just a matter of reading a lot of code in the installation path.
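When pip prints something ambiguous about the “base system,” the interpreter itself can report which environment is actually active. A small sketch; run it inside the conda env inside the distrobox container to see exactly which layer pip would target:

```python
import sys
import sysconfig

def env_report():
    """Identify the Python environment pip would install into: inside
    a conda env or venv, `prefix` differs from `base_prefix`, and
    `purelib` is the site-packages directory packages land in."""
    return {
        "executable": sys.executable,
        "prefix": sys.prefix,
        "base_prefix": sys.base_prefix,
        "site_packages": sysconfig.get_paths()["purelib"],
    }

for key, value in env_report().items():
    print(f"{key}: {value}")
```

If `prefix` equals `base_prefix` and points at /usr, pip is talking about the system interpreter rather than the conda layer.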

j4k3 OP ,
@j4k3@lemmy.world avatar

Wait. Does emerge support building packages natively when they are not from Gentoo?

Most of the stuff I’m messing with is mixed repos with entire projects that include binaries for the LLMs, weights, and such. Most of the “build” is just setting up the python environment with the right dependency versions for each tool. The main issues are the tools and libraries like transformers, pytorch, and anything that interacts with CUDA. These get placed all over the file system for each build.

j4k3 OP ,
@j4k3@lemmy.world avatar

Do you happen to know what distrobox options there are for extra root directories associated with other distro containers, if there is an effective option to separate these, or if this is part of the remote “home” mount setting? I tried installing an Arch container on a fedora base system. Distrobox automatically built various Arch root directories even though the container should have been rootless.

j4k3 ,
@j4k3@lemmy.world avatar

You nailed the rule of thirds on so many levels here. Nice shot.

Looking for beginner friendly free or cheaper solutions for a chatbot

Hello all, you all seem to be well versed in this stuff and I can’t seem to find many ai or chatbot communities at all. Anyway, I’ve just been using the free, outdated gpt3.5 on the openai site and it really opened my eyes to how useful of a tool it is. I mean it helps me with everything, especially computer related and...

j4k3 ,
@j4k3@lemmy.world avatar

Originally posted this to beehaw on another account:

Oobabooga is the main GUI used to interact with models.

github.com/oobabooga/text-generation-webui

FYI, you need to find checkpoint models. In the available chat models space, naming can be ambiguous for a few reasons I’m not going to ramble about here. The main source of models is Hugging Face. Start with this model (or get the censored version):

huggingface.co/…/llama2_7b_chat_uncensored-GGML

First, let’s break down the title.

  • This is a model based on Meta’s Llama2.
  • This is not “FOSS” in the GPL/MIT sense. The model has a license that is quite broad in scope, with the key stipulation that it cannot be used commercially in apps that have more than 700 million users.
  • Next, it was quantized by a popular user going by “The Bloke.” I have no idea who this is IRL, but I imagine it is a pseudonym or corporate alias given how much content is uploaded by this account on HF.
  • This model has 7 billion parameters and is fine-tuned for chat applications.
  • “Uncensored” means it will respond to most inputs as best it can. It can get NSFW, or talk about almost anything. In practice there are still some minor biases that are likely just overarching morality inherent to the datasets used, or it might be coded somewhere obscure.
  • The last part of the title says this is a GGML model, meaning it can run on CPU, on GPU, or split between the two.

As for options on the landing page, or “model card”:

  • You need to get one of the older-style models that have “q(number)” as the quantization type. Do not get the ones that say “qK”, as these won’t work with the llama.cpp file you will get with Oobabooga.
  • Look at the guide at the bottom of the model card where it tells you how much RAM you need for each quantization type. If you have an Nvidia GPU with the CUDA API, enabling GPU layers makes the model run faster, and with quite a bit less system memory than what is stated on the model card.
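The RAM numbers on the model card roughly track the size of the quantized weights. As a back-of-the-envelope sketch (my own rule of thumb, not the card’s formula; the per-file numbers on the card are authoritative):

```python
def ggml_ram_gb(params_billion, bits_per_weight, overhead_gb=1.0):
    """Very rough GGML memory estimate: the quantized weights dominate
    (params * bits / 8 bytes), plus a fudge constant for the KV cache
    and runtime buffers. This only explains why the 4-bit files need
    far less RAM than the 8-bit ones."""
    return params_billion * bits_per_weight / 8 + overhead_gb

# A 7B model at 4-bit quantization: 7 * 4 / 8 + 1 = 4.5 GB, versus
# 7 * 8 / 8 + 1 = 8.0 GB for the 8-bit file.
print(ggml_ram_gb(7, 4), ggml_ram_gb(7, 8))
```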

The 7B models are about like having a conversation with your average teenager. Asking technical questions yielded around 50% accuracy in my experience. A 13B model got around 80% accuracy. The 30B WizardLM is around 90-95%. I’m still working on trying to get a 70B running on my computer. A lot of the larger models require compiling tools from source. They won’t work directly with Oobabooga.

j4k3 ,
@j4k3@lemmy.world avatar

There may be other out of the box type solutions. This setup really isn’t bad. You can find info on places like YT that are step by step for Windows.

If you are at all interested in learning about software and how to get started using a command line, this would be a good place to start.

Oobabooga is well configured to make installation easy. It just involves a few commands that are unlikely to have catastrophic errors. All of the steps required are detailed in the README.md file. You don’t actually need to know or understand everything I described in the last message. I described why the model is named like x/y/z if you care to understand. This just explained details I learned by making lots of mistakes. The key here is that I linked to the model you need specifically and tried to explain how to choose the right file from the linked model. If you still don’t understand, feel free to ask. Most people here remember what it was like to learn.

j4k3 , (edited )
@j4k3@lemmy.world avatar

First of all, have you replicated the actual white paper tests with identical methodology?

If you use a different setup, such as easy setups with popular web UIs or checkpoint models, or models with a different quantization method, you are going to see different performance, likely drastically different. Most of these models are meant to run at full precision, where even a small model like a 7B will require enterprise-class hardware to run.

The number of parameters is like a total vocabulary. Indeed a larger model has a bigger chance of having whatever specialization you are looking to implement. The smaller models will require fine tuning on any niche subject of interest.

Honestly, go play with Stable Diffusion and images for a while as an exercise in how models work. Look at how textual inversion, LoRAs, LyCORIS, and prompts work in detail. Try prompting with and without specialized fine-tuning. Try some fine-tuning of your own. Stable Diffusion is much more accessible in this area. Go get several checkpoints with various sizes and styles. This will teach you a ton about what is possible with fine-tuning and small checkpoints. It is far more accessible, and the results are much clearer to see.

As far as models with technical accuracy go: out of the box, the WizardLM 30B GGML with the largest quantization size you can fit in system memory is likely your best option.

j4k3 ,
@j4k3@lemmy.world avatar

The bootloader functionality is the main thing you really want to know but is hard to find out in most cases. If you can find a machine that accepts custom keys with secure boot you’re better off. There are methods that enable secure boot without the ability to add custom keys, but this involves special 3rd party keys signed by Microsoft. It also makes kernel mods a pain if not impossible. The only machines you can fully control are those that can accept custom keys.

There is an excellent guide that describes every aspect of this, including the attack vectors, vulnerabilities, and peripheral uses of the system. It is from the US government here: …defense.gov/…/CTR-UEFI-Secure-Boot-Customization…

The only other reference I have found with additional information is from a Gentoo guide that describes how to boot into the UEFI system and make changes directly. This may be an option if you can’t alter secure boot.

wiki.gentoo.org/wiki/…/Configuring_Secure_Boot

Again, this only really applies to modern hardware with secure boot, and only in instances where you may need to run custom kernels or modules other than those that come presigned by distro packagers using Microsoft’s 3rd party key.

j4k3 ,
@j4k3@lemmy.world avatar

I don’t mind mine. It works fine in Fedora, but I only use it for CUDA/AI stuff and no gaming. I probably could game, but haven’t cared to go down into that money pit yet.

I screwed up and followed outdated advice and guides for my initial install and config. That broke the proprietary driver after the first kernel update. After reading the official Fedora documentation, I now have the self-compiling kernel driver that automatically rebuilds itself after every kernel change.

As far as AI, a laptop with a 3080 Ti with 16 GB of VRAM is quite capable. There is nothing else that comes close to that much VRAM in a mobile device.

j4k3 ,
@j4k3@lemmy.world avatar

No mention of the most important detail in the article: did they request to control/replace the account, or did they do as directed by the head twit and X the account? X is no better than the nonsense Trump tried to make, MyMoscow, FaceFascist, Brighton, or whatever it was called. It’s the top pick from the Taliban though, so there’s a win for the blood-emerald African space Karen.

j4k3 ,
@j4k3@lemmy.world avatar

Back in the day, the library was much, much more essential. It was the only access to advanced knowledge. I remember driving (riding) long distances to get to a better library for school projects when I was a kid. It was like the obligatory odyssey one did just to find an address for an outdated article that half mentioned what you needed but had no citations. It is hard to believe how isolated information was 30 years ago. Big libraries were like a religious holy site of opportunity back then: choir, clouds parting, golden rays, Morgan Freeman, and all.

Hollywood’s Fight Against A.I. Will Affect Us All: Screenwriters, actors, authors, and artists are fighting to ensure that human beings are not shunted to the margins of our culture. (newrepublic.com)

Hollywood’s Fight Against A.I. Will Affect Us All: Screenwriters, actors, authors, and artists are fighting to ensure that human beings are not shunted to the margins of our culture.::Screenwriters, actors, authors, and artists are fighting to ensure that human beings are not shunted to the margins of our culture.

j4k3 ,
@j4k3@lemmy.world avatar

Mycroft, generate me an action movie that is a cross between Mission Impossible and The Matrix where the characters only have 5 fingers on each hand, and their navel and face are oriented on the same side of the body.

Mycroft, generate me an action movie that is a cross between Mission Impossible and The Matrix where the characters only have 5 fingers on each hand, their navel and face are oriented on the same side of the body, and their elbows and knees have correct orientation.

How did I get to tentacle porn? Like, seriously…

A group of researchers said they have found a way to hack the hardware underpinning Tesla’s infotainment system, allowing them to get what normally would be paid upgrades — such as heated rear seats (techcrunch.com)

The researchers will present their research next week at the Black Hat cybersecurity conference in Las Vegas....

j4k3 ,
@j4k3@lemmy.world avatar

This bot needs a conditional check implemented to count the number of lines in the original post’s comment, then only post if the length is shorter than some arbitrary value or percentage of its own results.
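The check being suggested is a one-liner in spirit. A hypothetical sketch of the guard the bot could run before replying (the names and the 80% threshold are made up for illustration):

```python
def should_post_summary(original, summary, max_ratio=0.8):
    """Only post the bot's summary when it is actually shorter than
    the comment it summarizes, measured in lines; otherwise stay
    silent rather than reply with something longer than the source."""
    original_lines = original.count("\n") + 1
    summary_lines = summary.count("\n") + 1
    return summary_lines <= max_ratio * original_lines
```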

Is the Fediverse (or thrediverse specifically) going to follow the way of the old web?

People using their communities to link to like communities because the fediverse/thrediverse is so vast and it’s not as easy to navigate? Maybe, eventually finding a way to consistently open links to other instances/services in a way that opens it in your preferred instance/service (some sort of protocol or extension). Our...

j4k3 ,
@j4k3@lemmy.world avatar

I think we’ll eventually need some kind of suggested content feed available for a low effort method to expand interests. I have been using the all feed to discover new stuff but I haven’t subscribed to new stuff using All in the last couple of weeks. I usually search a few times within subjects I’m actively pursuing at any given time, but I am unlikely to search for the more peripherally entertaining type of content that I may really appreciate if it is suggested based on peer networking only.

j4k3 ,
@j4k3@lemmy.world avatar

Not as much screen size difference as overall weight. Pixel 4 was good. Pixel 6 is a lead weight trying to get your pants off.

j4k3 ,
@j4k3@lemmy.world avatar

The big thing now is GrapheneOS on the Pixels. It is a custom ROM that works exactly like an OEM one. The reason this works is that the Pixels ship with the same type of cryptographic hardware security chip as modern computers with TPM/secure boot. This chip makes it possible to create a verified chain of trust in the device, so Graphene can do over-the-air updates. The ROM is configured with root disabled and the full Android third-party lockdown of user space for regular operations. You still have root through developer mode and USB if you need it. I’ve done custom ROMs for many years in the past, but nothing compares to the Graphene experience. As far as I am concerned, Graphene’s list of supported devices is the entire list of phones I will consider purchasing.

I need a "$ dnf where " command or equivalent

Problems are related to where distrobox is stashing a clblas and openblas package. It’s on the base system too, further complicating searches. I’ve tried all kinds of stuff with no luck. I hate the massive find command’s obfuscated ancient API of a full sized ANSI language specification. Python or C is easier than that...

A.I. is on a collision course with white-collar, high-paid jobs — and with unknown impact (www.cnbc.com)

A.I. is on a collision course with white-collar, high-paid jobs — and with unknown impact::Technology has disrupted many workplaces. Artificial intelligence like ChatGPT may have an outsized impact on higher-paid office jobs, experts said.

j4k3 ,
@j4k3@lemmy.world avatar

AI is a highly advanced tool. Prompt engineers are the new white collar force multiplier. All the stupid articles are due to the efforts of venture capital to establish a monopoly with proprietary AI by using manipulative propaganda. AI on its own is unreliable and dumb. It has no long term persistent memory. It is nowhere near an Artificial General Intelligence. The large language model is just a massive statistical analysis of subject categorization and a statistical probability of what word should come next. Its only “memory” is what happens in a conversation as it is happening. This is not “learned” information. It is simply tinting the subject and word probabilities as they are happening. Larger models just have more subjects and a larger lexicon that often includes several human languages.

I swear, at this point, The Terminator was the greatest propaganda film for billionaire AI dominion. The fear mongering is a joke. Go download Oobabooga. Then go to the GitHub of AI, a website called Hugging Face. Find the Llama2 7B uncensored model by The Bloke and get the one that has GGML in the name. This will run on almost any modern computer. GGML means the model can run with split operations: on more than one GPU, on a CPU, or on a combined CPU and GPU. All you really need is at least 16GB of system memory. If you only have 16GB, use the 4-bit version. Uncensored means you can talk about anything.

Ask it the dumb stuff you’re scared of, like this fear-mongering bullshit; these systems are not that bright. You’re not going to take over the world with this rat brain. Ask it about that simple command line problem you spent the last hour sorting out, or what is wrong with your spreadsheet function, or how some other example works, or whatever the F some regex or sed command does. This is where LLMs are freaking awesome. If you barely explore this new tech, the news sounds like a bunch of parroting idiots. Which it is.
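The “statistical probability of what word should come next” framing above can be made concrete with a toy bigram model. This is an illustration of the idea only; real LLMs condition on long contexts with learned weights, not raw counts:

```python
import random
from collections import Counter, defaultdict

def build_bigrams(corpus):
    """Count which word follows which; this table of counts is the
    entire 'model'."""
    follows = defaultdict(Counter)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def next_word(follows, word, rng):
    """Sample the next word in proportion to how often it followed
    `word` in the corpus, the same loop shape an LLM sampler uses."""
    options = follows[word]
    return rng.choices(list(options), weights=list(options.values()))[0]

follows = build_bigrams("the cat sat on the mat because the cat was tired")
print(next_word(follows, "the", random.Random(0)))
```

With this tiny corpus, `next_word(follows, "cat", rng)` can only ever return “sat” or “was”, because those are the only words that ever followed “cat”; scaling the table up and replacing counts with learned weights is the leap to an LLM.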

j4k3 ,
@j4k3@lemmy.world avatar

My favorite has been exploring models and obscure software that is challenging to get working. When I get the inevitable failure, I just paste the entire error message into a WizardLM 30B model and usually get quite helpful insights. I’ve gotten much further into compilation and finished installing projects I never would have managed otherwise. It has expanded my bashrc and command knowledge substantially. Sed, awk, and regex are easy now. I can practically get an AI to exit vim for me.

I just doubled the system RAM in my machine 30 minutes ago and am a quarter of the way into a 70B Llama 2 instruct GGML. If the jump from 13B to 30B is any indication, this should be around 95-98% accurate even with obscure technical questions.

What are the implications of having Anaconda running at the bootloader level as an installer for Fedora WS? (docs.fedoraproject.org)

I’ve been trying Workstation recently. Python dependency issues caused me to switch to Silverblue for the last 2 years. A new machine with Nvidia got me to try WS. I just had a mystery problem with Python after booting today and that got me looking into Anaconda. I didn’t know it was used under the kernel like this. I’m...

j4k3 OP ,
@j4k3@lemmy.world avatar

stackoverflow.com/…/is-anaconda-for-fedora-differ…

I should have searched for this first I guess. That is reassuring. I was mostly uncomfortable with the idea of the two being the same.

Still, Anaconda from RH claims the software is mostly written in Python. That still makes me uneasy. I’ve always thought of C as very near the hardware and assembly, and an interpreted language as prioritizing portability, flexibility, and access. I find it far harder to hack around with a binary written in C than with all of the Python stuff I’ve encountered. Maybe I’m just mistaken in my understanding of how this code is running.

I look at C programs as tied more to the hardware they are compiled to run on, with permanence. I look at Python as a language designed to constantly deprecate itself and its code base. It feels like an extension of proprietary hardware planned obsolescence and manipulation. I don’t consider programs written in Python to have permanence or long-term value, because their toolchains become nearly impossible to track down from scratch.

j4k3 OP ,
@j4k3@lemmy.world avatar

I mean, I’m playing with offline AI right now, and reproducibility sucks. Most of this is Python-based. I think I need to switch to Nix for this situation where I need to compile most tools from source. While Python itself may be easily available for previous versions, sorting out the required dependencies is outside of what most users are capable of reproducing.

I get the impression C is more centralized, with less potential to cause problems. At least when it comes to hobbyist embedded projects, the stuff centered around C has staying power long term. The toolchains are easy to reproduce. Most Python stuff is more trouble to reproduce than it is worth after it is a few years old. IMO it feels something like the Android SDK, where obsolescence is part of the design.

I shouldn’t need to track down all the previous package versions just to reproduce software. It should simply work by default when all of the dependencies are named. There should be something like a central library of deprecated packages that guarantees reproducibility. Even if these packages are available somewhere in the aether, piecing together relevant documentation for them is nearly impossible in historical context. This has been my experience as an intermediate user and hobbyist.
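Naming all of the dependencies is exactly what version pinning tries to give you on the Python side. A minimal sketch of generating pinned requirements from the current environment (the cheap half of what `pip freeze` automates):

```python
import importlib.metadata as metadata

def pin(packages):
    """Resolve each package name to the exact installed version and
    emit a `name==version` requirements line, so the environment can
    be rebuilt later; missing packages are flagged, not guessed."""
    pinned = []
    for name in packages:
        try:
            pinned.append(f"{name}=={metadata.version(name)}")
        except metadata.PackageNotFoundError:
            pinned.append(f"# {name}: not installed here")
    return pinned
```

Writing something like `pin(["torch", "transformers"])` out to a requirements.txt at the moment a build works captures the versions; Nix goes further by also pinning the toolchain underneath them.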

j4k3 ,
@j4k3@lemmy.world avatar

The basis of theft. Proprietary software is always about exploiting the end user through theft of ownership. Open source has already beaten these asshats at AI. No one wants to run their stalkerware in a world where any open and offline option exists. This is extremely obvious.

This is the one video to watch about what is happening with AI rn, especially when it comes to Meta/Llama-2 and its place in FOSS. Yann LeCun is former Bell Labs AI just like Stallman. (piped.video)

This is one of the few videos I’ve watched that has stayed on my mind for days. I’ve been exploring this space myself, and I really think the perspective presented here is the real future. All the stupid media and push back in this space seems like a big campaign to try to salvage OpenAI and Google’s massive investments in...

j4k3 ,
@j4k3@lemmy.world avatar

The US doesn’t support the attacks, just like Russia is fighting Nazis. The equipment just fell off the truck, and all of our people take long smoke breaks while being a smoke-free force.

j4k3 ,
@j4k3@lemmy.world avatar

They are just desperate to gain market share as fast as possible, because the proprietary business model is already dead. Llama-2 is open to commercial use and mostly open source. Like, I hate Meta more than most, but the guy in charge of AI, VP Yann LeCun, is a former Bell Labs guy and is full steam ahead on open source. The benchmarks for the latest model are really good, even at the 7B level. (At 16-bit precision, the weights take roughly two bytes per parameter, so a 7B model needs about 14 GB of VRAM on a GPU.) This is accessible on mid-level hardware, and it can be run with no major strings attached for anything less than 700M users. There are already versions of this model available without the safety-filter lobotomy too. In the next few months, paying stupid subscription services to stalk you will be relegated to people who do not care about, or understand the implications of, protecting one’s privacy and freedom of information.
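The rule of thumb above can be sketched as a one-liner: VRAM for the weights is roughly parameter count times bytes per parameter, ignoring activation and KV-cache overhead.

```python
def vram_gb(params_billions, bytes_per_param=2):
    """Rough VRAM needed just to hold model weights, in GB.

    bytes_per_param: 2 for fp16/bf16, 0.5 for 4-bit quantization.
    Ignores activation memory and KV-cache overhead.
    """
    return params_billions * 1e9 * bytes_per_param / 1e9

print(vram_gb(7))                        # 7B at fp16 -> 14.0 GB
print(vram_gb(7, bytes_per_param=0.5))   # 7B at 4-bit -> 3.5 GB
```

The 4-bit figure is why quantized 7B models fit on consumer cards with 8 GB of VRAM, with room left over for context.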

j4k3 OP ,
@j4k3@lemmy.world avatar

Cost of living is not magical. It is created by the opportunities available. If people can pay more for the house or rent, the market will adjust to it. The regulating factor is simply the opportunities available. All other factors are peripheral.

If you can export money by working remotely, you’re floating on a rare exception to the rule. If everyone could do the same, the cost of living would adjust to compensate. You are essentially taking the same risk as an ancient merchant on a ship. When the circumstances change, you can easily find yourself stuck in a place without any opportunities.

The fallacy is looking at cost of living as some kind of magical random generated number. It is not. It is a direct measure of the opportunities available to the average person. It doesn’t matter where you live, how poor or how rich the area seems, the average person is encountering the exact same pressure and stress about simply staying afloat. The grass is not greener on either side of the fence. The only difference is the availability of opportunities for the average person in an area.

j4k3 ,
@j4k3@lemmy.world avatar

Llama-2 because you can run it on your own hardware. For the big GPU on a rented instance: Falcon 70b. OpenAI and Google can have turns playing proprietary asshat jack in the box.

j4k3 ,
@j4k3@lemmy.world avatar

I think the only real measure is ‘consciousness’, as I will argue that a PMIC would qualify as ‘sentient’ when it comes to detecting and logically managing real-world variable systems.

If ‘sapience’ is just wisdom, and wisdom is just the timely application of knowledge, any current transformer based LLM can take in knowledge and distil a solution that could be called wisdom if it came from a human.

Consciousness is hard to test. How does a human generated dataset create a model with the ability to tell us the answers to questions we can not ourselves answer about life, the universe, and existence?
