What do you think of people buying Mac minis to run AI?

Today during Apple's earnings call we heard they're out of Mac minis and back-ordered due to a huge surge in demand.

We know barely any decent-sized models can run on it. Most buyers are buying one to run Openclaw in complete isolation from their regular desktops or laptops.

A VPS for a few dollars a month would be sufficient to run tools like Openclaw.

What are your thoughts on why people are buying Mac minis?

5 points

namegulf

2 days ago


20 comments

spqw 2 days ago

openclaw fills the gap left by Siri and was built for tight integration with the Apple ecosystem.

this is how i understand the hype for something like openclaw vs the capabilities zapier or n8n have provided for years now.

i would say the majority of users span tech and non-tech roles, and they:

(1) have an iPhone with all their contacts and data on iCloud

(2) have a macbook at work and use macOS daily

(3) use ChatGPT or Claude daily and trust it with their personal data

(4) aren't familiar with Linux or a VPS and don't trust themselves to set one up through the terminal

(5) feel more at ease with "a second macOS that i can debug visually on my monitor at home" rather than a remote linux VPS

you could still rent a mac mini, but cloud providers will ask around $119 a month for a Mac mini M4 with 16GB of RAM. $599 is unbelievably cheap for a second computer that can do anything your usual Macbook can

  • namegulf 2 days ago

    That's an interesting use case.

    Yeah, about five months of rent equals the purchase price; that speaks for itself

sergeivaskov 2 days ago

The premise that 'barely any decent size models can run on it' misses the biggest advantage of Apple Silicon: Unified Memory. Where else can you get a machine with 64GB or 128GB of VRAM for running quantized models at this price point? Buying the equivalent VRAM in Nvidia GPUs (like multiple RTX 3090s/4090s) would cost thousands of dollars, draw massive power, and sound like a jet engine. The Mac Mini is dead silent, sips power, and lets you run 70B+ parameter models locally via llama.cpp. It's currently the undisputed king of VRAM-per-dollar for local inference.
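A quick back-of-the-envelope check on that claim (a rough sketch; real memory use also depends on the quantization format's per-block overhead and the KV cache, which this ignores):

```python
def quantized_model_gib(params_billion: float, bits_per_weight: float) -> float:
    """Approximate weight-memory footprint of a quantized model in GiB.

    params_billion: model size in billions of parameters
    bits_per_weight: effective bits per weight after quantization
    """
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30  # bytes -> GiB


# A 70B-parameter model at 4-bit quantization:
print(round(quantized_model_gib(70, 4), 1))  # ~32.6 GiB of weights
```

So the weights of a 4-bit 70B model come in around 33 GiB, which leaves headroom inside a 64GB unified-memory machine, while the same capacity in discrete GPUs would take two 24GB cards at minimum.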

  • necovek a day ago

    I believe AMD Ryzen AI Max 395+ with 128GB of RAM is the "undisputed king of VRAM-per-dollar", but Macs are slightly better performing for local inference.

    • namegulf 10 hours ago

      This is interesting.

      What kind of models have you run on this configuration?

  • namegulf 2 days ago

    When it launched, unified memory was seen as a downside, since you can't swap in higher-capacity memory sticks.

    Now it's a game changer when it comes to AI. You're right, it delivers the performance for local inference

vlad_omniforge 2 days ago

I think the main reason people are buying mac minis is how user-friendly they are.

You can expect a software engineer or a devops guy to run stuff on a VPS, but a slightly less technical person won't ever go there. In contrast, people are familiar with macOS, and that's way less scary to set up.

The added benefit of the mac mini is that it can also double as a second device one could use for something else too

  • namegulf 2 days ago

    Yep, it's a well-designed, tiny, affordable, and powerful Mac that appeals even to non-technical folks

Riddler3000 2 days ago

They probably hope to run an LLM locally on the Mac mini, but they don't realize that decent models require much more computational power

  • namegulf 2 days ago

    Yup, that's the point.

    Most don't realize that until they get around to configuring openclaw or other agentic frameworks, and then find out they have to use anthropic or openai via API (an additional cost)

__patchbit__ 2 days ago

The AI techniques that do fit on the mac mini, and that are accessible thanks to Apple's hci advantage, are worth it to people willing to pay to solve their niche problems.

  • namegulf 2 days ago

    True, a good use case

tfwnopmt 2 days ago

It seems to me they've had enough foresight to prevent their openclaw credentials from being stolen via copy-fail

  • necovek 2 days ago

    Can you elaborate? Has it been shown that copy-fail can break through KVM boundaries in Linux?

    I understood it to be local-only (which is more likely to affect containers, but I don't think that was demonstrated either).

ex-aws-dude a day ago

It’s just so it can access iMessage

That’s the whole reason

  • namegulf 10 hours ago

    Can you elaborate a bit about this?

jeraldbenny 2 days ago

AI can even run on a Raspberry Pi

  • namegulf 2 days ago

    Are you talking about running agents or models?

paulcole 2 days ago

I’ll let you know when I get mine. Ordered April 1 and not expected until August.

  • namegulf 2 days ago

    Yeah, it's all back-ordered due to the huge surge in demand

    You may also check eBay