== gpt-5-1-thinking ==

=== User: I am evaluating MacBook Air with 24gb and 32gb memory and also a MacBook Pro with 24/32 GB ===

I am evaluating a MacBook Air with 24 GB and 32 GB memory and also a MacBook Pro with 24/32 GB.

Can you compare prompt processing and token generation speeds for running gpt-oss-20b or models up to 15 GB memory size, both MoE and dense?

Also compare prices for the base 512 GB models at Australian retail prices.

=== Assistant: Short answer: for GPT-OSS-20B / 15 GB models, the chip Air M4 vs Pro M4 Pro/Max matters a lot more for speed than 24 GB… ===

Short answer: for GPT-OSS-20B / ~15 GB models, the chip (Air M4 vs Pro M4 Pro/Max) matters a lot more for speed than 24 GB vs 32 GB RAM. RAM mostly affects whether the model fits comfortably and how much you can multitask, not raw tokens/sec. Below is a hardware + speed + AU pricing comparison, assuming you're using llama.cpp / LM Studio / Ollama with a 4-bit GGUF of around 12–15 GB (dense or MoE).

==== I'll stick to current Apple AU models: ====

* MacBook Air M4 (10-core CPU / 10-core GPU)
** 24 GB / 32 GB unified memory options, 512 GB SSD or higher.<ref>{{cite web|title=Apple|url=https://www.apple.com/au/macbook-air/specs/|publisher=apple.com|access-date=2025-12-05}}</ref>
* MacBook Pro 14-inch M4 Pro (12-core CPU / 16-core GPU)
** Base: 24 GB unified memory, 512 GB SSD.<ref>{{cite web|title=Apple|url=https://www.apple.com/au/shop/buy-mac/macbook-pro/14-inch|publisher=apple.com|access-date=2025-12-05}}</ref>
** Can be configured higher (36 GB, 48 GB, etc.), but those jump you into M4 Max or 16-core M4 Pro variants. There is no exact 32 GB config on the M4 Pro – it goes 24 → 36/48+.<ref>{{cite web|title=Apple|url=https://www.apple.com/au/macbook-pro/specs/|publisher=apple.com|access-date=2025-12-05}}</ref>

I'll treat "Pro 32 GB" as "Pro with the next memory tier up (36 GB / 48 GB)", because that's what Apple actually sells.

==== We do have some real benchmarks: ====

* M4 Pro laptop:
** ~33 tokens/sec for GPT-OSS-20B in one benchmark.<ref>{{cite web|title=DEV Community|url=https://dev.to/czmilo/openai-gpt-oss-complete-guide-2025-first-reasoning-model-that-runs-on-laptops-285f|publisher=DEV Community|access-date=2025-12-05}}</ref>
** ~44 tokens/sec, 0.40 s to first token in LM Studio on an M4 Pro MacBook Pro.<ref>{{cite web|title=ernestchiang.com|url=https://www.ernestchiang.com/en/notes/ai/openai-gpt-oss/|publisher=ernestchiang.com|access-date=2025-12-05}}</ref>
* MacBook Air M4:
** Public reports show roughly 16–24 tokens/sec for GPT-OSS-20B on an Air M4.<ref>{{cite web|title=X (formerly Twitter)|url=https://x.com/CyrusAiCo/status/1953795608537706746|publisher=X (formerly Twitter)|access-date=2025-12-05}}</ref>
* Mac mini M4 Pro:
** llama.cpp community benchmarks show ~57 tok/s for GPT-OSS-20B with an M4 Pro mini under tuned settings – a good upper bound for the same chip in a better-cooled chassis.<ref>{{cite web|title=GitHub|url=https://github.com/ggml-org/llama.cpp/discussions/15396|publisher=github.com|access-date=2025-12-05}}</ref>

Putting this together, you can think in ballpark ranges (generation speed):

| Configuration | Chip | GPT-OSS-20B generation (ballpark) |
| --- | --- | --- |
| MacBook Air, 24 GB | M4 | ~16–24 tok/s |
| MacBook Air, 32 GB | M4 | ~16–24 tok/s (same chip; extra RAM adds headroom, not speed) |
| MacBook Pro 14-inch, 24 GB | M4 Pro | ~30–45 tok/s |
| MacBook Pro 14-inch, 36/48 GB | M4 Pro / M4 Max | ~30–45 tok/s or higher |

==== Roughly: ====

* Prompt processing (ingest) is more bandwidth-heavy and can be a bit faster than generation.
* Generation is what you feel in an interactive chat (tokens flowing out).
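To make the ingest-vs-generation distinction concrete, here is a minimal measurement sketch (not part of the original answer) using llama-cpp-python on macOS with Metal. The model filename and prompt are placeholders, and counting streamed chunks as tokens is only an approximation.

<syntaxhighlight lang="python">
# Minimal sketch: measure prompt-ingest speed and generation speed separately
# with llama-cpp-python (pip install llama-cpp-python, Metal build on macOS).
import time
from llama_cpp import Llama

llm = Llama(
    model_path="gpt-oss-20b-Q4_K_M.gguf",  # placeholder: any ~12-15 GB 4-bit GGUF
    n_gpu_layers=-1,                       # offload all layers to the Apple GPU (Metal)
    n_ctx=8192,
    verbose=False,
)

prompt = "Summarise the trade-offs of unified memory for local LLMs. " * 100  # synthetic long prompt
n_prompt = len(llm.tokenize(prompt.encode("utf-8")))

t0 = time.perf_counter()
t_first = None
n_generated = 0
for _chunk in llm(prompt, max_tokens=256, stream=True):  # each streamed chunk is roughly one token
    if t_first is None:
        t_first = time.perf_counter()                    # prompt has been ingested by now
    n_generated += 1
t_end = time.perf_counter()

print(f"prompt ingest: {n_prompt / (t_first - t0):.1f} tok/s "
      f"({t_first - t0:.2f} s to first token)")
print(f"generation:    {n_generated / (t_end - t_first):.1f} tok/s")
</syntaxhighlight>

Running the same script on each candidate machine gives directly comparable ingest and generation figures for your own prompts and context lengths.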
Based on llama.cpp patterns and the M4 Pro benchmarks:

''(table of extrapolated prompt-ingest speeds not preserved in this capture)''

These ingest numbers are extrapolated from the Mac mini M4 Pro ~57 tok/s benchmark and general Apple-Silicon scaling, not from direct tests on your exact configs.<ref>{{cite web|title=GitHub|url=https://github.com/ggml-org/llama.cpp/discussions/15396|publisher=github.com|access-date=2025-12-05}}</ref>

==== For models up to ~15 GB on disk, in 4-bit GGUF: ====

* Dense 20B (e.g. GPT-OSS-20B MXFP4, 12–16 GB)
** Uses ~12–16 GB of unified memory.
** Speed is close to the numbers above.
* MoE models with a similar file size
** More total parameters, but only some "experts" are active per token.
** For a 15 GB MoE vs a 15 GB dense model, you typically see:
*** Similar or slightly slower prompt ingest (more routing overhead).
*** Generation that's ~0.8×–1.1× dense speed, depending on how many experts fire per token and library optimisations.
** On Apple Silicon, llama.cpp tends to scale well with both dense and MoE; MLX can be ~20% faster than llama.cpp in some tests.<ref>{{cite web|title=Reddit|url=https://www.reddit.com/r/LocalLLaMA/comments/1hcgfsb/comparing_distributed_inference_performance_on_m4/|publisher=reddit.com|access-date=2025-12-05}}</ref>

Rule of thumb on these machines:

* If it fits fully in RAM without swap and you're not running lots of other stuff, a 15 GB MoE will behave similarly to GPT-OSS-20B in throughput, sometimes a bit slower.

==== Using Apple Australia online store RRP (non-education) as of today: ====

===== From the Apple AU "Buy MacBook Air" page: =====

''(MacBook Air pricing table not preserved in this capture)''<ref>{{cite web|title=Apple|url=https://www.apple.com/au/shop/buy-mac/macbook-air|publisher=apple.com|access-date=2025-12-05}}</ref>

===== From the Apple AU "Buy 14-inch MacBook Pro" page: =====

''(MacBook Pro pricing table not preserved in this capture)''<ref>{{cite web|title=Apple|url=https://www.apple.com/au/shop/buy-mac/macbook-pro/14-inch|publisher=apple.com|access-date=2025-12-05}}</ref>

==== Given you're: ====

* Running GPT-OSS-20B and other 12–15 GB models locally,
* Likely using llama.cpp / LM Studio / Ollama fairly heavily,
* And you care about prompt & generation speed, not just "can it load?"…

===== MacBook Air M4, 24 GB / 512 GB =====

* Runs GPT-OSS-20B 4-bit comfortably (it wants 11–17 GB).<ref>{{cite web|title=Ted Tschopp's Homepage|url=https://tedt.org/OpenAIs-GPT-OSS-Explained/|publisher=Ted Tschopp's Homepage|access-date=2025-12-05}}</ref>
* You'll see ~16–22 tok/s in practice – absolutely usable, but not "snappy" under long contexts.
* Great battery, lighter, cheaper by ~A$600 vs the Pro.

===== When to go 32 GB on the Air =====

* If you want 20B + a big context (32k) + lots of other apps open without swap (a rough memory-budget sketch follows these recommendations).
* Speed doesn't jump, but consistency and headroom do.

===== MacBook Pro 14-inch, M4 Pro, 24 GB / 512 GB =====

* Roughly 2× GPT-OSS-20B throughput vs the Air (~30–45 tok/s).
* Active cooling → much better sustained speed for long chats / high batch sizes.
* If you often run multi-agent / multi-model setups, this is the better long-term play.

===== Higher-memory Pro (M4 Max, 36 GB+) =====

* Worth it only if you plan to:
** Run multiple 20B-class models concurrently, or
** Push very long contexts + background tooling, or
** Experiment with much larger models (e.g. GPT-OSS-120B shards, big MoE, etc.).
* Otherwise, the M4 Pro 24 GB is already a beast for 20B.
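For the "fits without swap" question above, here is a rough memory-budget sketch (not part of the original answer). The architecture numbers are illustrative placeholders rather than GPT-OSS-20B's real configuration; substitute values from the model card or GGUF metadata, and adjust the overhead assumptions to your own workload.

<syntaxhighlight lang="python">
# Rough "does 20B + long context fit without swap?" budget. All architecture
# numbers below are illustrative placeholders, NOT GPT-OSS-20B's real config;
# pull real values from the model card / GGUF metadata before trusting this.

def kv_cache_gib(n_layers: int, n_kv_heads: int, head_dim: int,
                 context_tokens: int, bytes_per_value: int = 2) -> float:
    """K+V cache size in GiB: 2 tensors (K and V) per layer, fp16 by default."""
    total_bytes = 2 * n_layers * n_kv_heads * head_dim * context_tokens * bytes_per_value
    return total_bytes / 1024**3

weights_gib = 13.0   # ~4-bit GGUF loaded into unified memory (assumed size)
kv_gib      = kv_cache_gib(n_layers=32, n_kv_heads=8, head_dim=128,
                           context_tokens=32_768)   # hypothetical 32k context
runtime_gib = 2.0    # compute buffers / scratch space (assumption)
macos_gib   = 6.0    # OS + browser + editor etc. (assumption)

budget = weights_gib + kv_gib + runtime_gib + macos_gib
for ram_gib in (24, 32):
    verdict = "fits without swap" if budget <= ram_gib else "likely swapping"
    print(f"{ram_gib} GB machine: need ~{budget:.1f} GiB -> {verdict}")
</syntaxhighlight>

With these example assumptions the total lands around 25 GiB, which is exactly the kind of case where 24 GB starts swapping while 32 GB leaves headroom.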
==== If I translate all of that into a blunt summary for your GPT-OSS-20B and <15 GB LLM work: ====

* "I want the best perf/$ for 20B on a laptop" → 14-inch MacBook Pro M4 Pro, 24 GB / 512 GB.
** Big jump in tokens/sec and sustained throughput compared with any Air.
* "I prioritise price, weight and battery but will still run GPT-OSS-20B" → MacBook Air M4, 24 GB / 512 GB.
** Accept ~half the speed, but still usable.
* "I want to future-proof for heavier local workloads but stay portable" → MacBook Air M4, 32 GB / 512 GB (CTO).
** Same speed as the 24 GB Air but more headroom and less swapping.

If you tell me which stack you'll standardise on (llama.cpp vs MLX vs LM Studio / Ollama) and your typical context lengths, I can sketch a more precise "tokens/sec vs context size vs machine" table for your exact workflow.
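As an illustration of the "tokens/sec vs context size" sweep offered above, here is a minimal sketch against Ollama's local REST API, whose /api/generate response includes prompt-eval and eval token counts and durations. The model tag, filler text and context sizes are placeholders; running the same script unchanged on each candidate machine gives comparable numbers.

<syntaxhighlight lang="python">
# Minimal "tokens/sec vs context size" sweep via Ollama's local REST API.
# Assumes Ollama is running locally and the model has already been pulled.
import requests

MODEL = "gpt-oss:20b"                        # placeholder: whichever local model you use
FILLER = "unified memory bandwidth notes " * 64   # a few hundred tokens of padding

for repeats in (1, 4, 16, 64):               # roughly increasing prompt/context sizes
    prompt = FILLER * repeats + "\nSummarise the above in one sentence."
    r = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": MODEL, "prompt": prompt, "stream": False,
              "options": {"num_predict": 128}},
        timeout=600,
    ).json()
    # Durations are reported in nanoseconds; prompt_eval_count may be low if the
    # prompt prefix was cached from a previous run.
    ingest_tps = r["prompt_eval_count"] / (r["prompt_eval_duration"] / 1e9)
    gen_tps    = r["eval_count"] / (r["eval_duration"] / 1e9)
    print(f"~{r['prompt_eval_count']:>5} prompt tokens: "
          f"ingest {ingest_tps:7.1f} tok/s, generation {gen_tps:5.1f} tok/s")
</syntaxhighlight>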