
scorpious ,

To be fair, M-series Macs are pretty insanely efficient with memory. Unless you’ve actually used one extensively, I can understand the attitudes here…BUT:

I’ve done broadcast animation for many years, and back in ‘21 delivered an entire season of info/explainer-type pieces for a network show — using Motion, Cinema 4D, and After Effects (+ Ai and Ps) — all of it running on a base-level, first-gen M1 Mini (8/256). Workflow was fast and smooth; even left memory-pig apps running in the background most of the time…not one hiccup. Oh, and everything was delivered in 4k.

So 8gb actually is plenty for most folks…even professionals doing some heavy lifting. Sure I’d go for 16 next one, but damn I was/am still impressed. (Maybe it sucks for gaming, I don’t do that so have no clue).

aleph , (edited )
@aleph@lemm.ee avatar

It’s clear that the M3 MacBooks are noticeably slower with 8GB of RAM than with 16GB for various tasks, though, including photo & video editing and 3D rendering.

Sure, 8GB gets the job done, but why is Apple selling “professional” grade laptops in this price range that clearly require additional memory to reach peak performance?

scorpious ,

Point taken! Clearly more is always better. Don’t have any experience with the M2 or 3.

I’m just adding a personal experience with having the minimum be plenty to get big jobs done.

tsonfeir ,
@tsonfeir@lemm.ee avatar

I get more because I know I’ll need more. I don’t get less and then complain I should have gotten more even though I knew I couldn’t upgrade later.

Really, Apple just shouldn’t have said what they did and they wouldn’t be in hot water.

freeman ,

It doesn’t matter how ‘insanely efficient’ they are. If your tasks need more than 8GB of memory, you are going to run out and start swapping to disk.

8GB worth of data is not heavy lifting for professional use.

scorpious ,

…And yet…?

My point is that while of course more is better, 8 sufficed for me…a professional, doing demanding…professional…work.

freeman ,

“Sufficed” is not an objective term, and it’s still not a favorable one, especially for machines that cost that much.

Your original point was that Apple’s CPUs are somehow more ‘efficient’ with RAM. That’s misinformation, to put it kindly.

kalleboo ,

It mostly just shows how crazy fast modern SSDs are that they can do swap duties with performance that is acceptable to many people. The SSD in my MacBook Pro can read/write at 5-6 GB/s. That means it can write out the whole 8 GB of memory of one of those smaller machines in under 2 seconds. As long as your current task fits in 8 GB and you’re fine waiting 2 seconds to switch between apps…
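The back-of-the-envelope math above works out, as a quick sketch (both figures are the ones quoted in the comment, not measurements):

```python
# Rough swap-time estimate: how long does it take to write all of RAM
# out to the SSD, assuming sustained sequential writes at the quoted speed?
ram_gb = 8            # memory of the base-model machine
ssd_gb_per_s = 5.0    # conservative end of the quoted 5-6 GB/s range

swap_out_s = ram_gb / ssd_gb_per_s
print(f"~{swap_out_s:.1f} s to page out {ram_gb} GB")  # ~1.6 s
```

Real swap traffic is smaller and more random than a full sequential dump, so this is a best case, but it shows why the stall is "a couple of seconds" rather than the near-freeze spinning disks used to produce.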

freeman ,

Yes if you don’t run out of ram you won’t face ram performance issues…

I wouldn’t be ok waiting 2 seconds to switch between apps on something the price of Mac laptop, even the cheapest m1.

woelkchen ,
@woelkchen@lemmy.world avatar

To be fair, at the price point of Macs, 16GB is easily achievable.

mariusafa ,

It’s okay if you run an efficient OS on it, which isn’t the case here.

dan1101 ,

That doesn’t help with memory hungry apps though.

iopq ,

There are people who never touch anything but the browser and email. For them the SSD keeping some page files is good enough

aleph ,
@aleph@lemm.ee avatar

That’s no justification for selling a >$1,000 MacBook Pro with only 8GB of RAM, though. It’s specifically marketed as a professional-class machine.

Dariusmiles2123 ,

Yeah, clearly the PRO part with just 8GB of RAM is the problem.

Come on, that’s what I have on my Surface Go 1 from 2019. It runs Linux perfectly and is okay for my needs, but I wouldn’t put such specs on a PRO thing😅

Rai ,

OSX is waaaay more memory efficient than windows…

mariusafa ,

Yeah, my blueprint of an efficient OS isn’t Windows either.

morrowind ,
@morrowind@lemmy.ml avatar

The recent stats I’ve seen indicate macOS usually uses more RAM.

uis ,

Anything is way more efficient than Windows. That’s a very low bar (or a high one, if you have to go under it).

Can your macOS run on a router with 32MB of RAM? Or on the most powerful supercomputer? Or both?

disguy_ovahea , (edited )

And it’s not RAM, it’s UM for an SoC. The usage of memory changed with the introduction of Apple Silicon.

billiam0202 ,

“Unified” only means there’s not a discrete block for the CPU and a discrete block for the GPU to use. But it’s still RAM: specifically, LPDDR4X (for M1), LPDDR5 (for M2), or LPDDR5X (for M3).

Besides, low-end PCs with integrated graphics have been using unified memory for decades; no one ever said “They don’t have RAM, they have UM!”

disguy_ovahea , (edited )

Yes, that’s true, but it’s still an indicator of an uninformed reporter.

Apple Silicon chips pass data from one dedicated core directly to another without passing through memory, hence the smaller processor cache. There are between 18 and 58 cores in the M3 (model dependent). The architecture works very differently from the conventional CPU/GPU/RAM model.

I can run FCP and Logic Pro and have memory to spare with 16GB of UM. The only thing that pushes me into swap is Chrome. lol

BearOfaTime ,

It’s a pointless distinction.

And in this case, it makes 8gig look even worse.

disguy_ovahea ,

Maybe you’re not familiar with the apps I’m referring to. Final Cut Pro and Logic Pro are professional video and audio workstations.

If I tried to master an export from Adobe Premiere Pro in Pro Tools on PC, I’d need 32GB of RAM to prevent stutter. I only use ~12GB of 16GB doing the same on Apple Silicon.

8GB of UM is not for someone running two pro apps at once. It’s for grandma to use for online banking and check her email and Facebook.

billiam0202 ,

it’s still an indicator of an uninformed reporter.

My dude, you’re literally in here arguing that because Apple has a blob for both CPU memory and GPU memory that somehow makes that blob “not RAM.” Apple’s design might give fantastic performance, but that’s irrelevant to the fact that the memory on the chip is RAM of known and established standards.

disguy_ovahea ,

Read my other replies to this comment. There’s no GPU. It’s an SoC.

uis ,

BCM2835 is SoC too. And RK3328. And Mali-450 is GPU.

disguy_ovahea ,

apple.com/…/apple-unveils-m3-m3-pro-and-m3-max-th…

Each power-intensive process is given its own dedicated core. The OS is designed specifically to send each dedicated process to the associated core. For example, your CPU isn’t bogged down decrypting data while loading an application.

You can’t compare it to anything else out at this time. Just learn about it, or don’t. Guessing is just a waste of time.

uis ,

You can’t compare it to anything else out at this time.

docs.kernel.org/scheduler/sched-capacity.html

For example, your CPU isn’t bogged down decrypting data while loading an application.

Basic priority-based scheduling.

disguy_ovahea ,

Sent to one of two processors on a PC, or to one of the 18-52 dedicated cores in an M chip.

uis ,

Great topic switch. Also, what century do you live in?

disguy_ovahea ,

The topic is substantiating that 8GB of UM on an Apple Silicon Mac is acceptable for a base model.

I’ve explained how the UM is used strictly as a storage liaison due to the processor having a multitude of dedicated cores, with the ability to pass data directly without utilizing UM.

I don’t know what you want from me, but maybe you should just do your own homework instead of being combative with people who understand something better than you.

uis ,

I’ve explained how the UM is used strictly as a storage liaison due to the processor having a multitude of dedicated cores, with the ability to pass data directly without utilizing UM.

I really doubt they run apps with cache turned into scratchpad memory.

BearOfaTime ,

Like has been done on laptops with on-board video cards since, well, forever?

disguy_ovahea ,

It’s different. The GPU is broken into several parts and integrated into the SoC along with the CPU’s dedicated processes. Data is passed within the SoC without entering UM. It’s exclusively used as a storage liaison.

You should check out Apple Silicon M-Series. Specs don’t translate to performance in the way conventional PC architecture does. I guarantee you’ll see PC manufacturers going to 2nm SoC configurations soon enough. The performance is undeniable.

uis ,
disguy_ovahea ,
uis ,

So it’s not on same chip with CPU?

disguy_ovahea , (edited )

A CPU performs integer math.

A GPU performs floating-point math.

Those are only two of the 18-52 cores (model dependent) of Apple M chips. The OS is designed around this for maximum efficiency. Most Macs don’t even have a fan anymore.

There. Is. No. Comparison. In. PC.

uis ,

A CPU performs integer math.

A GPU performs floating-point math.

A GPU performs integer math.

A CPU performs floating point math.

All four statements are true.

disguy_ovahea ,

That’s correct. My mistake.

n3m37h ,

Dude, it’s just GDDR#, the same stuff consoles use. PCs have had this ability for over a decade there, mate; Apple is just good at marketing.
What’s next? When VRAM overflows, it gets dumped into regular RAM? Oh wait, PCs can do that too…

disguy_ovahea ,

With independent CPU and GPU, sure. There’s no SoC that performs anywhere near Apple Silicon.

n3m37h ,

According to benchmarks, the 8700G vs. the M3 is on average 22% slower single-core and 31% faster multi-core; its FP32 is 41% higher than the M3’s, and its AI is 54% slower. The 8700G also uses 54% more energy.

What about those stats says AMD can’t compete? The 8700G is an APU, just as the M3 is.

disguy_ovahea ,

I’m talking about practical use performance. I understand your world; you don’t understand mine. I’ve been taking apart and upgrading PCs since the 286. I understand benchmarks. What you don’t understand is how MacOS uses the SoC in a way where benchmarks =/= real-world performance. I’ve used pro apps on powerful PCs and powerful Macs, and I’m speaking from experience. We can agree to disagree.

n3m37h ,

I grew up with a Tandy 1000 and was always getting yelled at for taking it apart, along with just about every PC we owned after that, too.

Benchmarks are indicative of real-world performance for the most part. If they were useless, we wouldn’t use them, kinda like UserBenchmark.

The one benefit Apple does have is owning its own ecosystem, where they can modify the silicon/OS/software to work better with each other.

That does not mean the M3 is the best there is and can’t be touched; that is just misleading.

The 8700G is gonna stomp the M3 using Maxon’s software suite, just as the M3 will stomp the 8700G using Apple’s software suite.

Then also, on top of that, the process node for manufacturing the silicon is different (3nm vs 4nm); that alone allows for a 20% (give or take) performance difference, just like every process node change in the past decade or so.

I’ll take the loss on the experience part, as the only Apple product I own is an Apple TV 4K, but there are many nuances you’ve obviously glossed over.

Is the M3 a good piece of silicon? Yes. Is it the best at EVERYTHING? Of course not. Should Apple give up because they are not the best? Fuck no.

disguy_ovahea ,

Man, you’re kinda off the point. This is about how much UM is appropriate for a base model. I’m simply saying the architecture of an SoC utilizes UM exclusively as a storage liaison, since the CPU and GPU are cores of the same chip. It simply does not mean the same thing as 8GB of RAM in standard architecture. As a pro app user, 16GB is enough. 8GB is plenty for grandma to check her Facebook and do her online banking.

n3m37h ,

There’s no SoC that performs anywhere near Apple Silicon.

Am I really missing the point? UM is not a new concept. Specifically, look at the PS5/Xbox Series X.

pcgamer.com/this-amd-mini-pc-kit-is-likely-made-o…

Notice the soldered RAM and lack of video card? Kinda like what the M series does.

And when all is said and done, 8GB is not nearly enough, and Apple should be chastised for it, just like Nvidia when they first decided to make 5 different variations of the 1060, making sure 4 of those variations would become e-waste in a few short years, and again with the 3050 6GB vs. 3050 8GB.

disguy_ovahea ,

They both have independent CPUs and GPUs. UM is not used to pass data from CPU to GPU on an SoC system; it’s exclusively a storage liaison. Therefore it’s used far less than in non-SoC applications.

The CPU and GPU are one chip. Learn about Apple Silicon SoC rather than trying to find a comparison. You won’t find one anywhere yet.

Ghostalmedia , (edited )
@Ghostalmedia@lemmy.world avatar

Have you used an 8 gig ARM Mac?

I’m pretty brutal on my machines, and my 8 gig M1 really only starts to beach ball when multiple accounts are open and those accounts all have bloated multimedia software running.

My 16 gig machines can handle that use case fine, but the 8 gig machine will occasionally beach ball.

Personally, I won’t buy an 8 gig config again. But I’m a fucking monster that leaves a million bloated things open across multiple active user sessions.

paraphrand ,

Even if they are right, no one cares and it will always be a bad look.

tal ,
@tal@lemmy.today avatar

I’m fine with this.

I mean, I have no interest in an 8GB machine, but it’s also fair to say that there definitely are people who are fine with it, and who would like to save the money. Say you’ve got four kids and you’re buying them all laptops – I dunno if that’s the thing parents do these days, or whether kids typically just get by on smartphones or what. And sometimes they get broken or whatnot, and you’re paying for the other expenses associated with those kids. That money adds up.

Apple runs a walled garden, unless things have changed in recent years while I wasn’t watching. They tried opening up to third-party hardware vendors back around 2000 with some third-party PowerPC vendors, found that too many users were buying that hardware instead of theirs, and killed off the clone vendors. That means that if you want to use MacOS, you have to buy Apple hardware. And so there’s good reason to have a broad range of offerings from Apple, even some that are higher-end or lower-end than the typical user might want, because Apple is the only option that MacOS users have. If I want to run Linux on a machine with 2GB of memory, I can do it, and if I want to run Linux on a machine with 256GB of memory, I can do it. MacOS users need to have an offering from Apple to do that.

Plus, I assume that these are running some form of solid-state storage, which makes hitting virtual memory a lot less painful than was the case in the past.

paraphrand ,

I agree. But we still have to listen to all the bitching.

ABCDE ,

We both have 8GB Airs in our house, an M1 and an M2. They run just fine.

lud ,

The thing is that Apple charges three kidneys per gigabyte over 8 GB.

dual_sport_dork , (edited )
@dual_sport_dork@lemmy.world avatar

If you’ve got four kids and you’re buying them all laptops, I don’t think buying them all Macs and “saving money” by getting cut-down machines with too little memory (or whatever other hobbling Apple may cook up now or later) is exactly the smart play. You would need to have a very compelling reason to absolutely have to run MacOS to the exclusion of everything else which if we’re honest, most people don’t.

A Lenovo IdeaPad Slim, just to pick an example out of a hat that contains many other options, costs half as much as the low-spec 2024 Macbook Air the article is spotlighting while having double the RAM, double the SSD, and, you know, ports. For the cost of an 8GB Macbook Pro you could buy a Legion Slim with an i7 and an RTX 4060 in it and have change left over, a machine which would blow that Mac out of the water.

There are a lot of things you can say about Macbooks, but being a good value for the money is consistently never one of them.

proton_lynx ,

Save money, buy an Apple computer. Choose one.

sugar_in_your_tea ,

Well yeah, they’re enough to meet the minimum use cases so they can upsell most people on expensive RAM upgrades.

That’s why I don’t buy laptops with soldered RAM. That’s getting harder and harder these days, but my needs for a laptop have also gone down. If they solder the RAM, there’s nothing you can (realistically) do if you need more later, so you’ll pay extra up front and they can upcharge a lot. If it’s not soldered, you have a decent option to buy RAM afterward, so there’s less value in upselling.

So screw you Apple, I’m not buying your products until they’re more repair friendly.

akilou ,

I had an extra stick of RAM available the other day, so I went to open my wife’s Lenovo to see if it’d take it, and the damn thing is screwed shut with the smallest Torx screws I’ve ever seen, smaller than what I have. I was so annoyed.

SpaceNoodle ,

The real question is why you don’t have a complete precision screwdriver set.

akilou ,

I thought I did! Until I got the smallest one out and it just spun on top of the screw

sugar_in_your_tea ,

I bought the E495 because the T495 had soldered RAM and one RAM slot, while the E495 had two replaceable RAM slots. Adding more RAM didn’t need any special tools. Newer E-series and T-series both have one RAM slot and some soldered RAM. I’m guessing you’re talking about one of the consumer lines, like the Yoga series or something?

That said, Lenovo (well, Motorola in this case, but Lenovo owns Motorola) puts all kinds of restrictions on your rights if you unlock the bootloader of their phones (PDF version of the agreement). That, plus going down the path of soldering RAM, gives me serious concerns about the direction they’re heading, so I can’t really recommend their products anymore.

If I ever need a new laptop, I’ll probably get a Framework.

akilou ,

Yeah, it’s a Yoga

Capricorn_Geriatric ,

all kinds of restrictions to your rights

The document mentions a lot of US laws. I wonder if they try the same over in the EU.

sugar_in_your_tea ,

I’m guessing it wouldn’t hold. But I’m in the US, so I’ll just avoid their phones going forward, and will probably avoid their laptops and whatnot as well just due to a lack of trust.

tal , (edited )
@tal@lemmy.today avatar

I keep looking at the Frameworks, because I’m happy with the philosophy, but the problem is that the parts that they went to a lot of trouble to make user-replaceable are the parts that I don’t really care about.

They let you stick a fancy video card on the thing. I’d rather have battery life – I play games on a desktop. If they’d stick a battery there, that might be interesting.

They let you choose the keyboard. I’m pretty happy with current laptop keyboards, don’t really need a numpad, and even if you want one, it’s available elsewhere. I’ve got no use for the LED inserts that you can stick on the thing if you don’t want keyboard there.

They let you choose among sound ports, Ethernet, HDMI, DisplayPort, and various types of USB. Maybe I could see putting in more USB-C than some other vendors have. But the stuff I really want is:

  • A 100Wh battery. Either built-in, or give me a bay where I can put more internal battery.
  • A touchpad with three mechanical buttons, like the Synaptics ones that the Thinkpads have.

The fact that they aren’t soldering in the RAM and NVMe is nice in that they’re committing to not charging much more than market rate, so I guess they should get credit for that, but they are certainly not the only vendor to avoid soldering those.

sugar_in_your_tea ,

Yeah, ThinkPad used to allow either a CD drive or an extra battery in their T-series. They stopped offering the extra battery and started soldering RAM, so I got the cheaper E-series (might as well save cash if I can get what I want).

I think there’s a market there. Have an option for a hot-swap battery to bring on trips and use the GPU at home. Serious travelers could even bring a spare battery to keep working for longer.

touchpad with three mechanical buttons

Yes please! And give me the ThinkPad nipple as well. :) If they had those, I’d not even bother looking at Lenovo. The middle button is so essential to my normal workflow that any other laptop (including my fancy MacBook for work) feels crappy.

I’m guessing the things they made modular are just the low hanging fruit. It’s pretty easy to make a USB-C to whatever port, it’s a bit harder to make a pluggable battery in a slot that can also support a GPU.

tal , (edited )
@tal@lemmy.today avatar

I don’t know if I’d recommend it, but if you are absolutely set on having the Thinkpad nipple – I don’t use it, even if I really want the Thinkpad trackpad – the factory that made the original IBM Model M keyboards is still in business somewhere in Kentucky. IIRC the employees bought it or something when IBM stopped making the things. They offer a nipple keyboard, goes by the name of “Endura Pro”. checks Unicomp. That’s the remnants in the US of the IBM business; the Chinese Lenovo purchased the laptops and also do the Trackpoint.

I got one like twenty years back, and while the actual buckling-spring keyswitches on the keyboard are pretty much immune to time, I wore out the switches on the mouse buttons, so I don’t know if I can give a buy recommendation for the mouse-enabled version (though maybe they improved the switches there). But if you really, really like it, that might be worthwhile for you. Last I looked they were still making them.

checks

They’ve got a message up saying that a supplier of a component used in that keyboard went under due to COVID so they suspended production. I don’t know what the status is on that.

www.pckeyboard.com/mm5/merchant.mvc?Screen=CTGY&C…

NOTICE CONCERNING AVAILABILITY – Unfortunately, we have had to temporarily suspend the sale of the Endura Pro keyboards due to another supply chain shortage. The supplier of one of the flex harnesses had to close their doors during the pandemic. We’ve begun the task of sourcing a new supplier but do not have a definite time frame for when these keyboards will be available again. For our customers with orders already placed, we have enough stock to complete all on order.

Keep in mind that this is a very large, heavy keyboard that you could brain someone with; if you’re going to haul it around with a laptop, it’s going to be larger and heavier than the laptop. Mentioning it mostly since I figure that you might use it at some location where you could leave the keyboard.

sugar_in_your_tea ,

The thing is, I only like the Trackpoint in a laptop. It’s really nice to scroll while holding the middle mouse button and just shifting my finger. That way, my hand is ready to type, unlike using the trackpad, where I have to move my hands to type, and it works well in my largely keyboard-driven workflow (ViM for text editing, Trackpoint for web browsing).

On a desktop, I have multiple screens and way more real estate, so the Trackpoint isn’t nearly as effective and it’s worth using the mouse instead.

But I honestly don’t use my laptop all that often, so it’s something I’m fine doing without. But all other things being similar, I’ll prefer the Trackpoint since it’s a nice value add.

It’s cool that they’re making those keyboards, though. I have nice mechanical keyboards, so I’m not looking for one, but I would be very interested in a Framework-compatible keyboard with a Trackpoint.

tal ,
@tal@lemmy.today avatar

smallest torx screws I’ve ever seen

Torx is legitimately useful for small screws, because it’s more resistant to stripping than Phillips.

Now, if they start using Torx security bits or some oddball shapes, then they’re just being obnoxious. But there are not-trying-to-obstruct-the-customer reasons not to use Phillips.

generichate1546 ,

The iFixit kit is a great toolset from the site that has every type of bit in it.

NekkoDroid ,
@NekkoDroid@programming.dev avatar

Got myself an iFixit Mako a while ago; really nice, even if I mostly just use the Phillips head ones.

generichate1546 ,

Right? It’s nice to have the occasional reverse tri head metric upside down weird random bit when you need it.

seth ,

Does it have triangle bits? Nintendo uses some really unusual driver shapes.

generichate1546 ,

I’ve taken apart so so so many things… sometimes for the right reasons and sometimes for the wrong reasons…my ZuneHD still works. I’ll never ever try to open a Surface product.

user224 ,
@user224@lemmy.sdf.org avatar

That’s why I don’t buy laptops with soldered RAM.

Oh, that shit is soldered on…
I mean, I did see that on some laptops, but only those cheap things in the €150 range (new) which even use eMMC for storage.

sugar_in_your_tea ,

Yup, all Apple laptops have soldered RAM for some years now…

cmnybo ,

It became pretty common even on higher end laptops when they switched to DDR5, but some manufacturers are starting to go back to socketed RAM.

BorgDrone ,

That’s why I don’t buy laptops with soldered RAM.

In my opinion, the disadvantages of user-replaceable RAM far outweigh the advantages. The same goes for discrete GPUs. Apple moved away from this, and I expect PC manufacturers to follow Apple’s move in the next decade or so, as they always do.

sugar_in_your_tea ,

Here’s how I see the advantages of soldered RAM:

  • better performance
  • less risk of physical damage
  • more energy efficient
  • smaller

The risk of physical damage is so incredibly low already, and energy use of RAM is also incredibly low, so neither of those seem important.

So that leaves performance, which I honestly haven’t found good numbers for. If you have this, I’m very interested, but since RAM speed is rarely the bottleneck in a computer (unless you have specific workloads), I’m going to assume it to be a marginal improvement.

So really, I guess “smaller” is the best argument, and I honestly don’t care about another half centimeter of space, it’s really not an issue.

BorgDrone ,

So that leaves performance, which I honestly haven’t found good numbers for. If you have this, I’m very interested, but since RAM speed is rarely the bottleneck in a computer (unless you have specific workloads), I’m going to assume it to be a marginal improvement.

This is where you’re mistaken. There is one thing that integrated RAM enables that makes a huge difference for performance: unified memory. GPU code is almost always bandwidth-limited, which is why on a graphics card the RAM is soldered on and physically close to the GPU itself; that proximity is needed for the high bandwidth requirements of a GPU.

By having everything in one package, CPU and GPU can share the same memory, which means that you eliminate any overhead of copying data to/from VRAM for GPGPU tasks. But there’s more than that, unified memory doesn’t just apply to the CPU and GPU, but also other accelerators that are part of the SoC. What is becoming increasingly important is AI acceleration. UMA means the neural engine can access the same memory as the CPU and GPU, and also with zero overhead.

This is why user-replaceable RAM and discrete GPUs are going to die out. The overhead and latency of copying all that data back and forth over the relatively slow PCIe bus is just not worth it.

sugar_in_your_tea ,

Do you have actual numbers to back that up?

The best I’ve found is benchmarks of Apple silicon vs Intel+dGPU, but that’s an apples to oranges comparison. And if I’m not mistaken, Apple made other changes like a larger bus to the memory chips, which again makes comparisons difficult.

I’ve heard about potential benefits, but without something tangible, I’m going to have to assume it’s not the main driver here. If the difference is significant, we’d see more servers and workstations running soldered RAM, but AFAIK that’s just not a thing.

BorgDrone ,

The best I’ve found is benchmarks of Apple silicon vs Intel+dGPU, but that’s an apples to oranges comparison.

The thing with benchmarks is that they only show you the performance of the type of workload the benchmark is trying to emulate. That’s not very useful in this case. Current PC software is not built with this kind of architecture in mind, so it was never designed to take advantage of it. In fact, it’s the exact opposite: since transferring data to/from VRAM is a huge bottleneck, software will be designed to avoid it as much as possible.

For example: a GPU is extremely good at performing an identical operation on lots of data in parallel. The GPU can perform such an operation much, much faster than the CPU. However, copying the data to VRAM and back may add so much additional time that it still takes less time to run it on the CPU, so a developer may choose to run it on the CPU instead, even if the GPU was specifically designed to handle that kind of work. On a system with UMA you would absolutely run this on the GPU.
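That tradeoff can be sketched with a toy break-even model (every number here is an illustrative assumption, not a benchmark):

```python
# Toy model: is offloading a kernel to a discrete GPU worth the PCIe copies?
# All inputs are made-up assumptions for illustration.
data_gb = 2.0           # working set that must cross the bus
pcie_gb_per_s = 16.0    # rough effective PCIe 4.0 x16 bandwidth (assumed)
cpu_s = 1.0             # assumed CPU time for the kernel
gpu_speedup = 10.0      # assumed GPU speedup on the kernel itself

transfer_s = 2 * data_gb / pcie_gb_per_s     # copy to VRAM + copy back
dgpu_s = cpu_s / gpu_speedup + transfer_s    # discrete GPU total
uma_s = cpu_s / gpu_speedup                  # unified memory: no copies

print(f"CPU {cpu_s:.2f}s | dGPU {dgpu_s:.2f}s | UMA {uma_s:.2f}s")
```

With these made-up numbers the discrete GPU still wins, but shrink the kernel or grow the data and the transfer term dominates, which is exactly when a developer falls back to the CPU on a PC but would still use the GPU under UMA.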

The same thing goes for something like AI accelerators. What PC software exists that takes advantage of such a thing?

A good example of what happens if you design software around this kind of architecture can be found here. This is a post by a developer who worked on Affinity Photo. When they designed this software they anticipated that hardware would move towards a unified memory architecture and designed their software based on that assumption.

When they finally got their hands on UMA hardware in the form of an M1 Max, that laptop chip beat the crap out of a $6000 W6900X.

We’re starting to see software taking advantage of these things on macOS, but the PC world still has some catching up to do. The hardware isn’t there yet, and the software always lags behind the hardware.

I’ve heard about potential benefits, but without something tangible, I’m going to have to assume it’s not the main driver here. If the difference is significant, we’d see more servers and workstations running soldered RAM, but AFAIK that’s just not a thing.

It’s coming, but Apple is ahead of the game by several years. The problem is that in the PC world no one has a good answer to this yet.

Nvidia makes big, hot, power hungry discrete GPUs. They don’t have an x86 core and Windows on ARM is a joke at this point. I expect them to focus on the server-side with custom high-end AI processors and slowly move out of the desktop space.

AMD is in the best position on desktop. They have a decent x86 core and GPU, and they already make APUs. Intel is trying to get into the GPU game but has some catching up to do.

Apple has been quietly working towards this for years. They have their UMA architecture in place, they are starting to put some serious effort into GPU performance and rumor has it that with M4 they will make some big steps in AI acceleration as well. The PC world is held back by a lot of legacy hard and software, but there will be a point where they will have to catch up or be left in the dust.

Turun ,

I understand the scepticism, but without links to what you’ve found, or which parts in particular you consider dubious claims (RAM speed can be increased when soldered, higher speeds lead to better performance, etc.), it comes across as “I don’t believe you, because I choose to not believe you.”

LTT has made a comparison video on ram speeds: www.youtube.com/watch?v=b-WFetQjifc

Do you need proof that soldered RAM can be made to run faster?

sugar_in_your_tea ,

Yes, and the result from that video (I assume; I skimmed it, but have watched similar videos) is that the difference is negligible (like 1-10 FPS) and you’re usually better off spending that money on something else.

I look at the benchmarks between the Intel MacBook Pro and the M1 MacBook Pro, and both use soldered RAM, yet the M1 gets so much better performance, even on non-GPU tasks (e.g. memory-heavy unit tests at work went from 3-5 min to 45-50 sec from the latest Intel to the M1). Docker build times saw a similar drop. But it’s hard for me to know how much of the difference is memory vs. CPU changes. I’d have to check, but I’m guessing there’s also the DDR4 to DDR5 switch, which increases memory channels.

The claim is that proximity to the CPU explains it, but I have trouble quantifying that. For me, a 1-10FPS drop isn’t enough to reduce repairability and expandability. Maybe it is for others though, but if that’s the difference, that’s a lot less than the claims they seem to make.

Turun ,

The video has a short section on productivity (i.e. rendering or compiling). That part is probably the most relevant for most people. Check the chapter view in YouTube to jump directly to it.

I think a 2x performance improvement is plausible when comparing non-soldered ram to the Apple silicon, which goes even further and has the memory on the die itself. If, of course, ram is the limiting factor.

The advantages of upgradable, expandable RAM are obvious. But let’s face it: most people don’t need, and even fewer use, that capability.

sugar_in_your_tea ,

short section on productivity

Looks about the same as the rest. Big gains for handbrake, pretty much nothing for anything else. And that makes sense, because handbrake will be doing lots of roundtrips to the GPU for encoding.

has the memory on the die itself

On the package, not the die. But perhaps that’s what you meant. On die would be closer to a massive cache like on the X3D AMD chips.

The performance improvement seems to be that Apple has a massive iGPU, not anything to do with RAM next to the CPU. So in CPU-only benchmarks, I’d expect the lion’s share of the difference to be CPU design and process node, not the memory.

Also, unified memory isn’t particularly new, APUs have supported it for years. It’s just not well utilized by devs because most users have dGPUs. So I think the main innovation here is Apple committing to it and providing tooling for devs to utilize the unified memory better, like console manufacturers have done.

So I guess that brings a few more questions:

  • what performance improvements could we see if devs use unified memory in socketed LPDDR memory in laptops?
  • how would that compare to Apple’s on-package RAM (I think it’s also LPDDR, so more apples to apples?)?
  • how likely are AMD and Intel to push for massive APUs on laptops?

I guess we’re kind of seeing it with the gaming PC handhelds, like the Steam Deck and Ayaneo et al., so maybe that’ll become more mainstream.

__dev ,

“unified memory” is an Apple marketing term for what everyone’s been doing for well over a decade. Every single integrated GPU in existence shares memory between the CPU and GPU; that’s how they work. It has nothing to do with soldering the RAM.

You’re right about the bandwidth though, current socketed RAM standards have severe bandwidth limitations which directly limit the performance of integrated GPUs. This again has little to do with being socketed though: LPCAMM supports up to 9.6GT/s, considerably faster than what ships with the latest macs.

This is why user-replaceable RAM and discrete GPUs are going to die out. The overhead and latency of copying all that data back and forth over the relatively slow PCIe bus is just not worth it.

The only way discrete GPUs can possibly be outcompeted is if DDR starts competing with GDDR and/or HBM in terms of bandwidth, and there’s zero indication of that ever happening. Apple needs to put a whole 128GB of LPDDR in their system to be comparable (in bandwidth) to literally 10-year-old dedicated GPUs; the 780 Ti had over 300GB/s of memory bandwidth with a measly 3GB of capacity. DDR is simply not a good choice for GPUs.
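Those bandwidth figures are easy to sanity-check with back-of-envelope arithmetic: peak bandwidth is just bus width times transfer rate. The bus widths and data rates below are approximate ballpark figures I’m assuming for illustration, not official specs:

```python
# Peak memory bandwidth in GB/s: (bus width in bits / 8) * gigatransfers/sec.
# All figures below are approximate, for illustration only.

def bandwidth_gbps(bus_width_bits: int, gigatransfers: float) -> float:
    """Theoretical peak bandwidth of a memory bus in GB/s."""
    return bus_width_bits / 8 * gigatransfers

# Typical dual-channel DDR5-6400 desktop: 128-bit bus at 6.4 GT/s
print(bandwidth_gbps(128, 6.4))   # 102.4 GB/s

# GTX 780 Ti: 384-bit GDDR5 at 7.0 GT/s
print(bandwidth_gbps(384, 7.0))   # 336.0 GB/s

# M2 Ultra-class unified memory: very wide (~1024-bit) LPDDR5 at 6.4 GT/s
print(bandwidth_gbps(1024, 6.4))  # 819.2 GB/s
```

The point being: the dedicated GPU gets its bandwidth from a wide, fast bus close to the die, and Apple only catches up by making the unified memory bus extremely wide.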

BorgDrone ,

“unified memory” is an Apple marketing term for what everyone’s been doing for well over a decade.

Wrong. Unified memory (UMA) is not an Apple marketing term, it’s a description of a computer architecture that has been in use since at least the 1970’s. For example, game consoles have always used UMA.

Every single integrated GPU in existence shares memory between the CPU and GPU; that’s how they work.

Again, wrong.

While iGPUs have existed for PCs for a long time, they did not use a unified memory architecture. What they did was reserve a portion of the system RAM for the GPU. For example on a PC with 512MB RAM and an iGPU, 64MB may have been reserved for the GPU. The CPU then had access to 512-64 = 448MB. While they shared the same physical memory chips, they both had a separate address space. If you wanted to make a texture available to the GPU, it still had to be copied to the special reserved RAM space for the GPU and the CPU could not access that directly.

With unified memory, both CPU and GPU share the same address space. Both can access the entire memory. No RAM is reserved purely for the GPU. If you want to make something available to the GPU, nothing needs to be copied, you just need to point to where it is in RAM. Likewise, anything done by the GPU is immediately accessible by the CPU.

Since there is one memory pool for both, you can use RAM more efficiently. If you have a discrete GPU with 16GB VRAM and your app only needs 8GB VRAM, the other 8GB just sits there unused. Alternatively, if your app needs 24GB VRAM, you can’t run it because your GPU only has 16GB, even if you have lots of system RAM available.

With UMA you can use all the RAM you have for whatever you need it for. On an M2 Ultra with 192GB RAM you can use almost all of that for the GPU (minus a little bit that’s used for the OS and any running apps). Even on a tricked-out PC with a 4090 you can’t run anything that needs more than 24GB VRAM. Want to run something where the GPU needs 180GB of memory? No problem on an M2 Ultra.
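To make that concrete, here’s a toy model of the difference. The sizes and the OS reserve are made-up assumptions, and it deliberately ignores real-world overheads:

```python
# Toy model: does a GPU workload fit in memory?
# Discrete GPU: limited by VRAM. Unified memory: limited by total system RAM
# minus a (hypothetical) reserve for the OS and running apps.

def can_run(gpu_need_gb, system_gb, vram_gb=None, unified=False, os_reserve_gb=8.0):
    """Naive check of whether a GPU working set fits."""
    if unified:
        return gpu_need_gb <= system_gb - os_reserve_gb
    return gpu_need_gb <= vram_gb

# 24GB workload on a 16GB discrete card: doesn't fit.
print(can_run(24, system_gb=64, vram_gb=16))            # False
# Same workload on a 192GB unified-memory machine: fits easily.
print(can_run(24, system_gb=192, unified=True))         # True
# Even a 180GB working set fits under this model.
print(can_run(180, system_gb=192, unified=True))        # True
```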

It has nothing to do with soldering the RAM.

It has everything to do with soldering the RAM. One of the reasons iGPUs sucked, besides not using UMA, is that GPU performance is almost always limited by memory bandwidth. Compared to VRAM, standard system RAM has much, much less bandwidth, causing iGPUs to be slow.

A high-bandwidth memory bus, like a GPU needs, has a lot of connections and runs at high speeds. The only way to do this reliably is to physically place the RAM very close to the actual GPU. Why do you think GPUs do not have user-upgradable RAM?

Soldering the RAM makes it possible to integrate a CPU and a non-sucking GPU. Go look at the inside of a PS5 or XSX and you’ll see the same thing: an APU with the RAM chips soldered to the board very close to it.

This again has little to do with being socketed though: LPCAMM supports up to 9.6GT/s, considerably faster than what ships with the latest macs.

LPCAMM is a very recent innovation. Engineering samples weren’t available until late last year and the first products will only hit the market later this year. Maybe this will allow for Macs with user-upgradable RAM in the future.

The only way discrete GPUs can possibly be outcompeted is if DDR starts competing with GDDR and/or HBM in terms of bandwidth

What use is high bandwidth memory if it’s a discrete memory pool with only a super slow PCIe bus to access it?

Discrete VRAM is only really useful for gaming, where you can upload all the assets to VRAM in advance and data practically only flows from CPU to GPU and very little in the opposite direction. Games don’t matter to the majority of users. GPGPU is much more interesting to the general public.

__dev ,

Wrong. Unified memory (UMA) is not an Apple marketing term, it’s a description of a computer architecture that has been in use since at least the 1970’s. For example, game consoles have always used UMA.

Apologies, my google-fu seems to have failed me. Search results are filled with only apple-related results, but I was now able to find stuff from well before. Though nothing older than the 1990s.

While iGPUs have existed for PCs for a long time, they did not use a unified memory architecture.

Do you have an example? Because every single one I look up has at least optional UMA support. Reserved RAM was a thing, but it was only the framebuffer that was reserved, not the GPU’s entire memory. AFAIK iGPUs have always shared memory like they do today.

It has everything to do with soldering the RAM. One of the reason iGPUs sucked, other than not using UMA, is that GPUs performance is almost limited by memory bandwidth. Compared to VRAM, standard system RAM has much, much less bandwidth causing iGPUs to be slow.

I don’t disagree, I think we were talking past each other here.

LPCAMM is a very recent innovation. Engineering samples weren’t available until late last year and the first products will only hit the market later this year. Maybe this will allow for Macs with user-upgradable RAM in the future.

Here’s a link to buy some from Dell: www.dell.com/en-us/shop/…/memory. Here’s the laptop it ships in: www.dell.com/en-au/…/precision-16-7670-laptop. Available since late 2022.

What use is high bandwidth memory if it’s a discrete memory pool with only a super slow PCIe bus to access it?

Discrete VRAM is only really useful for gaming, where you can upload all the assets to VRAM in advance and data practically only flows from CPU to GPU and very little in the opposite direction. Games don’t matter to the majority of users. GPGPU is much more interesting to the general public.

gestures broadly at every current use of dedicated GPUs. Most of the newfangled AI stuff runs on Nvidia DGX servers, which use dedicated GPUs. Games are a big enough industry for dGPUs to exist in the first place.

dojan ,
@dojan@lemmy.world avatar

What kind of disadvantages do you see?

BorgDrone ,

User-replaceable RAM is slow, which means you can’t integrate the CPU and GPU in one package. That means a GPU with its own RAM, which has huge disadvantages.

Even a 4090 only has 24GB, and transfers to/from VRAM are slow. The GPU can only operate on data in VRAM, so anything you want it to work on has to be copied over the relatively slow PCIe bus to the GPU. Then, once it’s done, you need to copy the results back over the PCIe bus to system RAM for the CPU to be able to access them. This considerably slows down GPGPU tasks.
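You can put rough numbers on that copy overhead. The bandwidth figures below are ballpark assumptions (PCIe 4.0 x16 peaks around 32 GB/s; high-end GDDR6X is around 1 TB/s), not measurements:

```python
# Rough cost of moving a buffer over PCIe vs. reading it from VRAM.
# Bandwidths are ballpark assumptions, not measured values.

def transfer_ms(size_gb: float, bandwidth_gbps: float) -> float:
    """Time in milliseconds to move size_gb at the given GB/s."""
    return size_gb / bandwidth_gbps * 1000

buf_gb = 8.0        # hypothetical working set
pcie4_x16 = 32.0    # ~32 GB/s peak, PCIe 4.0 x16
vram = 1000.0       # ~1 TB/s, high-end GDDR6X

print(f"PCIe copy: {transfer_ms(buf_gb, pcie4_x16):.1f} ms")  # 250.0 ms
print(f"VRAM read: {transfer_ms(buf_gb, vram):.1f} ms")       # 8.0 ms
```

So shuttling a working set back and forth can easily cost an order of magnitude more than the GPU actually touching it, which is the overhead unified memory avoids.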

dojan ,
@dojan@lemmy.world avatar

Ah yeah, I see. That’s definitely a downside if you work with something where that becomes a factor.

scarabic ,

These days I don’t realistically expect my RAM requirements to change over the lifetime of the product. And I’m keeping computers longer than ever: 6+ years where it used to be 1 or 2.

People have argued millions of times on the internet that Apple’s products don’t meet people’s needs and are massively overpriced. Meanwhile they just keep selling like crazy and people love them. I think the issue comes from having pricing expectations set in the race-to-the-bottom world of commoditized Windows/Android trash.

sugar_in_your_tea ,

I upgraded my personal laptop a year or so after I got it (started with 8GB, which was fine until I did Docker stuff), and I’m probably going to upgrade my desktop soon (16GB, which has been fine for a few years, but I’m finally running out). My main complaint about my work laptop is RAM (16GB I think; I’d love another 8-16GB), but I cannot upgrade it because it’s soldered, so I have to wait for our normal cycle (4 years; will happen next year). I upgraded my NAS RAM when I upgraded a different PC as well.

I don’t do it very often, but I usually buy what I need when I build/buy the machine and upgrade 3-4 years later. I also often upgrade the CPU before doing a motherboard upgrade, as well as the GPU.

Meanwhile they just keep selling like crazy and people love them. I think the issue comes from having pricing expectations set over the in race-to-the-bottom world of commoditized Windows/Android trash.

I might agree if Apple hardware was actually better than alternatives, but that’s just not the case. Look at Louis Rossmann’s videos, where he routinely goes over common failure cases that are largely due to design defects (e.g. display cable being cut, CPU getting fried due to a common board short, butterfly keyboard issues, etc). As in, defects other laptops in a similar price bracket don’t have.

I’ve had my E-series ThinkPad for 6 years with no issues whatsoever. The USB-C charge port is getting a little loose, but that’s understandable since it’s been mostly a kids’ Minecraft device for a couple of years now, and kids are hard on computers. I had my T-series before that for 5-ish years until it finally died of water damage (a lot of water).

Apple products (at least laptops) are designed for aesthetics first, not longevity. They do generally have pretty good performance though, especially with the new Apple Silicon chips, but they source a lot of their other parts from the same companies that provide parts for the rest of the PC market.

If you stick to the more premium devices, you probably won’t have issues. Buy business class laptops and phones with long software support cycles. For desktops, I recommend buying higher end components (Gold or Platinum power supply, mid-range or better motherboard, etc), or buying from a local DIY shop with a good warranty if buying pre built.

Like anything else, don’t buy the cheapest crap you can, buy something in the middle of the price range for the features you’re looking for.

unexposedhazard ,

Common apple L
