Not really sure who this is for. With soldered RAM, it's less upgradeable than a regular PC.
AI nerds, maybe? It sure has a lot of RAM in there, potentially attached to a GPU.
But how capable is that really when compared to a 5090 or similar?
The 5090 is basically useless for AI dev/testing because it only has 32GB. Might as well get an array of 3090s.
The AI Max is slower and finicky, but it will run things you’d normally need an A100 the price of a car to run.
But that aside, there are tons of workstation apps gated by nothing but VRAM capacity that this will blow open.
Useless is a strong term. I do a fair amount of research on a single 4090. Lots of problems can fit in <32 GB of VRAM. Even my 3060 is good enough to run small scale tests locally.
I’m in CV, and even with enterprise grade hardware, most folks I know are limited to 48GB (A40 and L40S, substantially cheaper and more accessible than A100/H100/H200). My advisor would always say that you should really try to set up a problem where you can iterate in a few days worth of time on a single GPU, and lots of problems are still approachable that way. Of course you’re not going to make the next SOTA VLM on a 5090, but not every problem is that big.
Fair. True.
If your workload/test fits in 24GB, that’s already a “solved” problem. If it fits in 48GB, it’s possibly solved with your institution’s workstation or whatever.
But if it takes 80GB, as many projects seem to require these days since the A100 is such a common baseline, you are likely using very expensive cloud GPU time. I really love the idea of being able to tinker with a “full” 80GB+ workload (even having to deal with ROCM) without having to pay per hour.
This is my use case exactly.
I do a lot of analysis locally; this is more than enough for my experiments and research. 64 to 96GB of VRAM is exactly the window I need. There are analyses I've had to let run for 2 or 3 days, and dealing with that on the cloud is annoying.
Plus, this will replace GH Copilot for me. It'll run voice models. I have diffusion model experiments I plan to run that are totally inaccessible to me locally (not just image models). And I've got workloads that take 2 or 3 days at 100% CPU/GPU that are annoying to run in the cloud.
This basically frees me from paying for any cloud stuff in my personal life for the foreseeable future. I’m trying to localize as much as I can.
I’ve got tons of ideas I’m free to try out risk free on this machine, and it’s the most affordable “entry level” solution I’ve seen.
And even better, “testing” it. Maybe I’m sloppy, but I have failed runs, errors, hacks, hours of “tinkering,” optimizing, or just trying to get something to launch that feels like an utter waste of an A100 mostly sitting idle… Hence I often don’t do it at all.
One thing you should keep in mind is that the compute power of this thing is nothing like an A/H100, especially if you get a big slowdown with ROCm, so what would take you 2-3 days could take over a week. It'd be nice if Framework sold a cheap MI300A, but… shrug.
I don’t mind that it’s slower, I would rather wait than waste time on machines measured in multiple dollars per hour.
I’ve never locked up an A100 that long, I’ve used them for full work days and was glad I wasn’t paying directly.
Yeah, I agree that it does help for some approaches that do require a lot of VRAM. If you’re not on a tight schedule, this type of thing might be good enough to just get a model running.
I don’t personally do anything that large; even the diffusion methods I’ve developed were able to fit on a 24GB card, but I know with the hype in multimodal stuff, VRAM needs can be pretty high.
I suspect this machine will be popular with hobbyists for running really large open weight LLMs.
… but only OpenCL workloads, right?
No, it runs off integrated graphics, which is a good thing because you can dedicate a large amount of RAM to GPU loads.
Not exactly. OpenCL as a compute framework is kinda dead.
What types of compute can you run on an AMD GPU today?
Most CUDA or PyTorch apps can be run through ROCm. Your performance/experience may vary. ZLUDA is also being revived as an alternate route to CUDA compatibility, since the vast majority of development/inertia is with CUDA.
Vulkan has become a popular “community” GPU agnostic API, all but supplanting OpenCL, even though it’s not built for that at all. Hardware support is just so much better, I suppose.
There are some other efforts trying to take off, like MLIR-based frameworks (with Mojo being a popular example), Apache TVM (with MLC-LLM being a prominent user), XLA or whatever Google is calling it now, but honestly getting away from CUDA is really hard. It doesn’t help that Intel’s unification effort is kinda failing because they keep dropping the ball on the hardware side.
Not really sure who this is for.
Second sentence in the linked article.
Really, Framework? Soldered RAM? How disappointing.
The CEO of Framework said that this was because the CPU doesn’t support unsoldered RAM. He added that they asked AMD if there was any way they could help them support removable memory. Supposedly an AMD engineer was tasked with looking into it, but AMD came back and said that it wasn’t possible.
Specifically, AMD said that it's achievable, but you'd be operating at approximately 50% of the available bandwidth, and that's with LPCAMM2. SO-DIMMs are right out of the running.
Mostly this is AMD's fault, but if you want a GPU with 96-110GB of memory, you don't really have a choice.
The Framework Desktop is powered by an AMD Ryzen AI Max processor, a Radeon 8060S integrated GPU, and between 32GB and 128GB of soldered-in RAM.
The CPU and GPU are one piece of silicon, and they’re soldered to the motherboard. The RAM is also soldered down and not upgradeable once you’ve bought it, setting it apart from nearly every other board Framework sells.
It’d raise an eyebrow if it was a laptop but it’s a freakin’ desktop. Fuck you framework.
insanely hostile response to something like this. they attempted to make these parts replaceable; AMD physically couldn't do it. they've still made it as repairable as possible, and it will without a doubt be more repairable than similar devices using this chipset. fucking relax, being reactionary without being informed is dumb.
We need to stop bending over backwards to lower our standards for the people making money off of us.
Have higher standards.
Don’t be a useful idiot.
You need to get a grip. Framework is a private company that has done very well by its customers and supporters so far. Stop being so fucking negative.
Oof, found the proud consumer.
They always get upset when you point out how they’re being taken advantage of.
They can never willingly take a bad deal, right? 😉
you seriously have issues. and you’re a dick on top of that.
you’re either a troll or just a shitty person.
Wow, calm down.
This is how you react when people say we should have higher standards? I guess you really are insecure about your purchasing habits.
The consumerism runs deep with you. Now you’re going to throw a tantrum trying to find ways to defend being taken for a ride 😎
will without a doubt be more repairable than similar devices
It’s a desktop, they’re repairable unless you solder something in.
using this chipset
They could’ve gone with any other chipset, making the whole thing irrelevant to begin with, but they couldn’t please the AI crowd that way.
By similar devices I obviously mean ITX PCs with similar chipsets. The average PC isn’t giving you 128GB of VRAM for $2k.
Local AI is a difficult thing to do right now, making a product to allow people to use AI without giving up their privacy is great.
I get the frustration with a system being so locked down, but if 32GB is the minimum, I don't really see the problem. This PC will be outdated before you really need to upgrade the RAM to play new games.
It’s not just about upgrading. It’s also about being able to repair your computer. RAM likes to go bad and on a normal PC, you can replace it easily. Buy a cheap stick, take out the old RAM, put in the new one and you’ll have a working computer again. Quick & easy and even your grandpa is able to run Memtest and do a quick switch. But if you solder down everything, the whole PC becomes electronic waste as most people won’t be able to solder RAM.
Yeah, it totally fucks repairability. But it sounds like this is not something this company normally does, and not something they could control.
They should at least offer a superior warranty to cover such scenarios.
The hell you mean, "not something they could control"? Their whole deal is making upgradeable, repairable devices, and replaceable RAM is no industry secret. Their options should have been make it work or don't make it at all.
If you read into it, this was a limitation on AMD’s part, which they tried to resolve. You don’t have to buy it, and the rest of their lineup should meet your expectations.
You know, they didn’t have to make a product with this specific chip. They did it anyway despite it being inherently incompatible with their former goals. And this is not about us customers, this is about Framework abandoning what they stood for and losing credibility in the process.
They kind of did. What other chip allows for 128 GB of VRAM or has that kind of iGPU?
🥱
Seriously, that's really disappointing. It really seems like investors decided that they needed to "diversify" their offering and that they need something with AI now… Framework was on a good path, IMO, but of course a repairable laptop only goes so far, since people can repair it and don't need to replace it every 2 years (or maybe just replace the motherboard). So if you want to grow, you need to make more products…
I agree. We need less soldered RAM designs. I thought repairability was something they appreciated.
Well, TBH they still do have repairable laptops, even new ones and all that, and the "excuse" is that the only way to properly use that specific AMD CPU is with that specific RAM, and the non-soldered bus wasn't enough. But still… I'll stick to old #ThinkPads, thank you.
Honestly this is exactly the product I was waiting for minisforum to make. I think this is actually a pretty solid move.
Are they going to at least make memory modules available for those who want to solder their own?
You can order those directly from chip suppliers (Mouser, Digikey, Arrow, etc.) for a lower cost than you could get them from Framework. Also, those are going to be very difficult to solder/desolder. You're going to need a hot air station, and you'll need to pre-warm the board to manage the heat sinking from the ground planes.
These little buggers are loud, right?
The Noctua fan option should be pretty quiet.
I have a Noctua fan in my PC. Quiet AF. I don't hear it and it sits beside me.
Hmm, probably not. I think it just has the single 120mm fan that probably doesn’t need to spin up that fast under normal load. We’ll have to wait for reviews.
I also just meant given the size constraints in tiny performance PCs. More friction in tighter spaces means the fans work harder to push air. CPU/GPU fans are positioned closer to the fan grid than on larger cases. And larger cases can even have a bit of insulation to absorb sound better. So, without having experimented with this myself, I would expect a particularly small and particularly powerful (as opposed to efficient) machine to be particularly loud under load. But yes, we’ll have to see.
This is one stupid product. It really goes against everything the framework brand has identified with.
Desktops are already that, though. In order for them to distinguish themselves in the industry, they can’t just offer another modular desktop PC. They can’t offer prebuilts, or gaming towers, or small form factor units, or pre-specced you-build kits. They can’t even offer low-cost micro-desktops. All of those markets are saturated.
But they can offer a cheap Mac Studio alternative. Nobody’s cracked that nut yet. And it remains to be seen if this will be it, but it certainly seems like it’s lined up to.
I'm not super well informed, but a socketable AMD NUC form factor machine would've been nice: a single PCIe slot, M.2, and 2 SO-DIMM RAM slots. They could've even given the option to route the PCIe slot externally and offered an add-on eGPU case that's actually worth a damn, a la Mega Drive/Sega CD.
It’s a straight up gimmick flanderizing the brand identity.
I’d argue not. It’s as modular/repairable as the platform can be (with them outright stating the problematic soldered RAM), and not exorbitantly priced for what it is.
But what I think is most “Framework” is shooting for a niche big OEMs have completely flubbed or enshittified. There’s a market (like me) that wants precisely this, not like a framework-branded gaming tower or whatever else a desktop would look like.
It’s as modular/repairable as the platform can be
It can’t be. That’s the point.
AMD said no due to the platform and apparently the signal integrity not being up to snuff.
…said no to what?
Modular RAM modules (e.g. DIMMs and, I believe, LPCAMM)
Soldered RAM is more efficient because it does not require big connectors and is closer to the CPU and GPU. 3D V-Cache is the ultimate example of this.
Yes I’m aware. What’s your point?
I guess I'm not sure what you want Framework to do instead. Just not launch this at all? What alternative are you advocating for?
So can someone who understands this stuff better than me explain how the L3 cache would affect performance? My X3D has a 96 MB cache, and all of these offerings are lower than that.
This has no X3D; the L3 is shared between CCDs. The only odd thing about this is that it has a relatively small "last level" cache on the GPU/memory die, but X3D CPUs are still kings of single-threaded performance since that L3 is right on the CPU.
This thing has over twice the RAM bandwidth of the desktop CPUs though, and some apps like that. Just depends on the use case.
and some apps like that.
I’d wager a guess: AI?
This is a standard a370 mini PC at a high price.
There’s Beelink, Minisforum, Aoostar and many others.
The AI max chips are a completely different platform, more than double the physical silicon size of most minipc chips.
Most miniPC vendors have already announced AI Max products:
https://www.gmktec.com/blog/gmktec-a-global-leader-in-ai-mini-pcs-unveils-the-amd-ryzen-ai-max-395
But Framework released it now.
Nope, they’re just available to pre-order with an estimate of shipping from Q3 this year. They’re not shipping now.
I just checked; the GMKtec link above says that their product is supposed to ship in Q1-Q2 this year, earlier than Framework's. The person above you confidently posted that it's shipping now and got an upvote despite being wrong.
I never understand people who confidently post wrong things that are easily googlable.
“To enable the massive 256GB/s memory bandwidth that Ryzen AI Max delivers, the LPDDR5x is soldered,” writes Framework CEO Nirav Patel in a post about today’s announcements. “We spent months working with AMD to explore ways around this but ultimately determined that it wasn’t technically feasible to land modular memory at high throughput with the 256-bit memory bus. Because the memory is non-upgradeable, we’re being deliberate in making memory pricing more reasonable than you might find with other brands.”
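The 256GB/s figure in the quote falls straight out of the bus width and transfer rate. A quick sanity check (the DDR5-5600 line is just an assumed typical dual-channel desktop for comparison, not something from the announcement):

```python
def peak_bandwidth_gb_s(bus_width_bits: int, transfer_mt_s: int) -> float:
    """Peak DRAM bandwidth: bytes per transfer times transfers per second."""
    return (bus_width_bits // 8) * transfer_mt_s / 1000

print(peak_bandwidth_gb_s(256, 8000))  # 256-bit LPDDR5x-8000 -> 256.0 GB/s
print(peak_bandwidth_gb_s(128, 5600))  # dual-channel DDR5-5600 -> 89.6 GB/s
```

Which also matches the earlier comment that this thing has over twice the RAM bandwidth of regular desktop CPUs.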
😒🍎
Edit: to be clear, I was only trying to point out that “we’re being deliberate in making memory pricing more reasonable than you might find with other brands” is clearly targeting the Mac Mini, because Apple likes to price-gouge on RAM upgrades. (“Unamused face looking at Apple,” get it? Maybe I emoji’d wrong.) My comment is not meant to be an opinion about the soldered RAM.
Would 256GB/s be too slow for large llms?
It runs on the GPU.
Many LLM operations rely on fast memory, and GPUs have that. Mind you, their memory is soldered and the vBIOS is practically a black box that is tightly controlled; nothing on a GPU is modular or repairable without soldering skills (and tools).
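Whether 256GB/s is "too slow" can be napkin-mathed: during autoregressive decoding, every generated token has to stream the whole weight set through memory, so bandwidth divided by model size gives a hard ceiling on tokens per second. A rough sketch (the 40GB figure is an assumed size for a ~70B-parameter model at 4-bit quantization; the 1008GB/s line is a 4090-class card for contrast):

```python
def decode_ceiling_tok_s(bandwidth_gb_s: float, weights_gb: float) -> float:
    """Upper bound on decode speed: each token reads all weights once."""
    return bandwidth_gb_s / weights_gb

print(decode_ceiling_tok_s(256, 40))   # ~6.4 tok/s ceiling for a 40 GB model
print(decode_ceiling_tok_s(1008, 20))  # ~50 tok/s ceiling for a 20 GB model
```

So big models are usable but not snappy, and real throughput lands below these ceilings once compute and KV-cache traffic are counted.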
Yeah hugely disappointed by this tbh. They should have made a gaming capable steam machine in cooperation with valve instead :)
Yeah.
But that’s AMD’s fault, as they gimped the GPU so much on the lower end. There should be a “cheap” 8-core, 1-CCD part with close to the full 40 CUs… But there is not.
This is an AI chip designed primarily for running AI workflows. The fact that it can game is secondary
Yeah exactly, it's worthless… Even the big players already admit the AI hype is over. This is the worst possible thing for them to launch; it's like they have no idea who their customers are.
I mean, it's not. You can do AI workflows with this wonderful chip.
If you wanna game, go buy nvidia
The AI hype being over doesn’t mean no one is working on AI anymore. LLMs and other trained models are here to stay whether you like it or not.
They still could; this seems aimed at the AI/ML research space TBH
To be fair it starts with 32GB of RAM, which should be enough for most people. I know it’s a bit ironic that Framework have a non-upgradeable part, but I can’t see myself buying a 128GB machine and hoping to raise it any time in the future.
If you really need an upgradeable machine you wouldn’t be buying a mini-PC anyways, seems like they’re trying to capture a different market entirely.
According to the CEO in the LTT video about this thing it was a design choice made by AMD because otherwise they cannot get the ram speed they advertise.
Which is fine, but there was no obligation for Framework to use that chip either.
In the same video it’s pointed out that this product wouldn’t exist at all without the AMD chip. It’s literally built around it.
Suppose the counter is that the market is chock full of modular options to build a system without framework.
In the laptop space, it’s their unique hook in a market that is otherwise devoid of modularity. In the desktop space, even the mini itx space, framework doesn’t really need to be serving that modularity requirement since it is so well served already. It might make it so I’m likely to ignore it completely, but I’m not going to be super bothered when I have so many other options
My biggest gripe about non replaceable components is the chance that they’ll fail. I’ve had pretty much every component die on me at some point. If it’s replaceable it’s fine because you just get a new component, but if it isn’t you now have an expensive brick.
I will admit that I haven’t had anything fail recently like in the past, I have a feeling the capacitor plague of the early 2000s influenced my opinion on replaceable parts.
I also don’t fall in the category of people that need soldered components in order to meet their demands, I’m happy with raspberry pis and used business PCs.
You can get an MS-A1 barebones from minisforum right now for like 215 - BYO cpu, ddr5, and m2. But it’s got oculink on the back (the pcie dock is 100, but not mandatory if you’re not going to use it). I think it’s supposed to be on sale for another couple days.
seems like they’re trying to capture a different market entirely.
Yes that’s the problem.
That they want to sell cheap AI research machines to use as workstations?
That’s a poor attempt to knowingly misrepresent my statement.
No, it is a question
The answer is that they’re abandoning their principles to pursue some other market segment.
Although I guess it could be said to be like Porsche and Lamborghini selling SUVs to support the development of their sports cars…
I don’t understand how that answers my question
Well, more specifically: why didn’t they try to go for LPCAMM?
Because you’d get like half the memory bandwidth to a product where performance is most likely bandwidth limited. Signal integrity is a bitch.
From what I understand, they did try, but AMD couldn’t get it to work because of signal integrity issues.
Calling it a gaming PC feels misleading. It’s definitely geared more towards enterprise/AI workloads. If you want upgradeable just buy a regular framework. This desktop is interesting but niche and doesn’t seem like it’s for gamers.
I think it’s like Apple-Niche
Now, can we have a cool European company doing similar stuff? At the rate it’s going I can’t decide whether I shouldn’t buy American because I don’t want to support a fascist country or because I’m afraid the country might crumble so badly that I can’t count on getting service for my device.
Wait I thought they were a Taiwanese company?
This comment made me double check. They’re from San Francisco: https://en.m.wikipedia.org/wiki/Framework_Computer
I’d prefer to buy taiwanese tbh. 😉
I could envision MNT Research trying this in the future, but not for now.
It’s kinda cool but seems a bit expensive at this moment.
For the performance, it's actually quite reasonable. 4070-like GPU performance, 128GB of memory, and basically the newest Ryzen CPU performance, plus a case, power supply, and fan, will run you about the same price as buying a 4070, case, fan, power supply, and CPU of similar performance. Except you'll actually get a faster CPU with the Framework one, and you'll also get more memory that's accessible by the GPU (up to the full 128GB minus whatever the CPU is currently using).
I swear, you people must be paid to shill garbage.
Always a response for anyone who has higher standards, lol.
“It’s too expensive”
“It’s actually fairly priced for the performance it provides”
“You people must be paid to shill garbage”
???
Ah yes, shilling garbage, also known as: explaining that the price to performance ratio is just better, actually.
I feel like this is a big miss by Framework. Maybe I just don't understand, because I already own a Velka 3 that I used happily for years, and building small form factor with standard parts seems better than what this is offering. Better as in better performance, aesthetics, space optimization, upgradeability. SFF is not a cheap or easy way to build a computer.
The biggest constraint building in the sub-5 liter format is GPU compatibility, because not many manufacturers even make boards in the <180mm length category. You also can't go much higher than 150-200 watts because cooling is so difficult. There are still options, though; I rocked a PNY 1660 Super for a long time, and the current most powerful option is a 4060 Ti. Although upgrades are limited to what manufacturers occasionally produce, it is upgradeable, and it is truly desktop performance.
On the CPU side, you can physically put in whatever CPU you want. The only limitation is that the cooler, alpenfohn black ridge or noctua l9a/l9i, probably won’t have a good time cooling 100+ watts without aggressive undervolting and power limits. 65 watts TDP still gives you a ryzen 7 9700x.
Motherboards have the SFF tax but are high quality in general. Flex ATX PSUs were a bit harder to find 5 or 6 years ago but now the black 600W enhance ENP is readily available from Velkase’s website. Drives and memory are completely standard. m.2 fits with the motherboard, 2.5in SATA also fits in one of the corners. Normal low profile DDR5 is replaceable / upgradeable.
What Framework is releasing is more like a laptop board in a ~4 liter case, and I really don't like that in order to upgrade the CPU, GPU, or memory you have to replace the entire board, because it's a soldered-on APU rather than socketed or discrete components. Framework's enclosure hasn't been designed to hold a motherboard + discrete GPU, and the board doesn't have a PCIe slot if you wanted to attach a card via a riser in another case. It could be worse, but I don't see this as a good use of development resources.
I think the biggest limiting factor for your mini PC will always be the VRAM and any workload that enjoys that fast RAM speed. Really, I think this mini PC from Framework is only sensible for certain workloads. The chip was positioned as a mobile part and is certainly majorly power efficient. On the other hand, I don't think it's for large scaling, but more for testing at home or working at home on the cheap. It isn't something I expected from Framework, though, as I expected them to maintain modularity, and the only modularity here is the little USB cards and the 3D-printed front panel designs, lol
Edit
Personally, I am in that niche market of high RAM speed. Also, access to high VRAM for occasional LLM testing. Though it is AMD, and I don't know if I am comfortable switching from Nvidia for that workload just yet. Renting a GPU is just barely cheap enough.
I really hope this won’t be too expensive. If it’s reasonably affordable i might just get one for my living room.
they already announced pricing for them.
$1,099 for the base AI Max model with 32GB(?), $1,999 fully maxed out with the top SKU.
Bummer
$1k for the base isn't horrible IMO, especially if you compare it to something like the Mac Mini starting at $600 and ballooning over $1k to increase to 32GB of "unified memory" and 1TB of storage.
I get why people are mad about the non-upgradable memory but tbh I think this is the direction the industry is going to go as a whole. They can’t get the memory to be stable and performant while also being removable. It’s a downside of this specific processor and if people want that they should just build a PC
And the “base” of this is physically more like a cut down M4 Pro than a regular M4.
I actually think it's not the worst-priced Framework product, ironically. Prebuilt $1k PCs tend to be something like a high-end CPU + 4060 desktop anyway, so spec-wise it's relatively reasonable. Take for example CyberPower's builds, which come from one of the few OEMs that, IIRC, Gamers Nexus thinks doesn't charge much of an SI tax on assembly; it's actually not incredibly far off performance-wise. I'd argue it's the most value per dollar of any Framework product, ironically.
Prebuilt 1k pcs tend to be something like a high end cpu + 4060 desktop anyways
That value proposition evaporates when you factor in repairability and upgradability of those prebuilts.
and if you actually want a PC for gaming on, a discrete gpu (eg: 7900xt) is going to be at least 3x faster at throwing polygons around than the 8060S. This thing is definitely better for AI workloads than gaming.
With a cheeky comparison to Apple’s nearly $5k offering.
Much like their laptops, I’m all for the idea, but what makes this desirable by those of us with no interest in AI?
I’m out of that loop though I get that AI is typically graphics processing heavy, can this be taken advantage of with other things like video rendering?
I just don’t know exactly what an AI CPU such as the Ryzen AI Max offers over a non-AI equivalent processor.
I hate how power hungry the regular desktop platform is so having capable APUs like this that will use less power at full load than a comparable CPU+GPU combo at idle, is great, though it needs to become a lot more affordable.
what makes this desirable by those of us with no interest in AI?
Just maybe not all products need to be for everyone.
Sometimes it's fine if a product fits your label of "Not for me".

Much like their laptops
It's nothing like their laptops, that's the issue :/ Soldered-in stuff all around, nonstandard parts that make it useless as a standard PC or gaming console.
Sorry, I was stating that “much like their laptops, I like the idea of these desktops.” I was not trying to insinuate that they themselves are alike.
There's lots of workstation niches that are gated by VRAM size, like very complex rendering, scientific workloads, image/video processing… It's not mega fast, but basically this can do things at a reasonable speed that you'd normally need a $20K+ computer to even try. Like, if something takes hours on an A6000 Ada or an A100, just waiting overnight on one of these is not a big deal. Crashing or failing to launch on a 4090 or 7900 XTX is.
That aside, the IGP is massively faster than any other integrated graphics you’ll find. It’s reasonably power efficient.
There is a massive push right now for energy efficient alternatives to nvidia GPUs for AI/ML. PLENTY of companies are dumping massive amounts of money on macs and rapidly learning the lesson the rest of us learned decades ago in terms of power and performance.
The reality is that this is going to be marketed for AI because it has an APU which, keeping it simple, is a CPU+GPU. And plenty of companies are going to rush to buy them for that and a very limited subset will have a good experience because they don’t have time sensitive operations.
But yeah, this is very much geared for light-moderate gaming, video rendering, and HTPCs. That is what APUs are actually good for. They make amazing workstations. I could also see this potentially being very useful for a small business/household local LLM for stuff like code generation and the like but… those small scale models don’t need anywhere near these resources.
As for framework being involved: Someone has kindly explained to me that even though you have to replace the entire mobo to increase the amount of memory, you can still customize your side panels at any moment so I guess that is fitting the mission statement.
For modularity: There’s also modular front I/O using the existing USB-C cards, and everything they installed uses standard connectors.