===== Meta reports Llama-3.1 405B was trained on 15T+ tokens, using 39.3M H100-80GB GPU-hours (700 W TDP class).<ref>{{cite web|title=Meta AI|url=https://ai.meta.com/blog/meta-llama-3-1/|publisher=Meta AI|access-date=2025-11-16}}</ref> =====
* Architecture: a vanilla giant decoder-only Transformer (no Mixture-of-Experts) with extended context.<ref>{{cite web|title=datacamp.com|url=https://www.datacamp.com/blog/llama-3-1-405b-meta-ai|publisher=datacamp.com|access-date=2025-11-16}}</ref>
* Rubin VR-200 is Nvidia's next-generation data-center training GPU in the Rubin family:
** Positioned as the workhorse training GPU alongside Rubin CPX (long-context prefill) and R100 decode parts.<ref>{{cite web|title=Futurum|url=https://futurumgroup.com/insights/nvidias-new-rubin-cpx-targets-future-of-large-scale-inference/|publisher=futurumgroup.com|access-date=2025-11-16}}</ref>
** VR-200-based platforms (e.g. NVL144 / Rubin + Vera racks) target multi-exaflop FP4/FP8 training and inference at extreme rack-level power (approaching 1 MW per rack).<ref>{{cite web|title=NVIDIA Newsroom|url=https://nvidianews.nvidia.com/news/nvidia-unveils-rubin-cpx-a-new-class-of-gpu-designed-for-massive-context-inference|publisher=NVIDIA Newsroom|access-date=2025-11-16}}</ref>
* Exact per-GPU TFLOPS figures for VR-200 are not yet public; a safe assumption is throughput significantly above Blackwell GB200 and well above H100. A back-of-envelope version of the "standard Llama-3 400B on Rubin" scenario is sketched below.
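As a rough illustration of the figures above, the following Python sketch works through the compute implied by the reported numbers using the standard ~6·N·D FLOPs-per-token training approximation, the utilization those GPU-hours imply against H100 peak dense BF16 throughput, the GPU board energy at TDP, and a parameterized VR-200 scenario. The H100 peak-throughput constant is the published SXM spec; the VR-200 speedup factor is a pure placeholder assumption, not a figure from the cited sources.

<syntaxhighlight lang="python">
# Back-of-envelope: Llama-3.1 405B training compute, energy, and a
# hypothetical "same run on Rubin VR-200" scaling. VR200_SPEEDUP_VS_H100
# is an assumption -- Nvidia has not published per-GPU VR-200 throughput.

N_PARAMS = 405e9          # model parameters (Meta)
TOKENS = 15e12            # training tokens (Meta, "15T+")
GPU_HOURS_H100 = 39.3e6   # reported H100-80GB GPU-hours (Meta)
H100_TDP_KW = 0.7         # 700 W TDP class
H100_PEAK_BF16 = 989e12   # H100 SXM dense BF16 peak, FLOP/s (Nvidia spec)

# Standard transformer training-cost approximation: ~6 FLOPs per parameter per token.
train_flops = 6 * N_PARAMS * TOKENS              # ~3.6e25 FLOPs

# Model FLOPs utilization (MFU) implied by the reported GPU-hours.
gpu_seconds = GPU_HOURS_H100 * 3600
mfu = train_flops / (gpu_seconds * H100_PEAK_BF16)

# GPU board energy at TDP (excludes cooling, networking, host overhead).
energy_gwh = GPU_HOURS_H100 * H100_TDP_KW / 1e6  # kWh -> GWh

# Hypothetical VR-200 scenario: assume an effective per-GPU training
# speedup over H100 (placeholder; substitute your own estimate).
VR200_SPEEDUP_VS_H100 = 8.0
gpu_hours_vr200 = GPU_HOURS_H100 / VR200_SPEEDUP_VS_H100

print(f"training compute : {train_flops:.2e} FLOPs")
print(f"implied MFU      : {mfu:.0%}")
print(f"GPU board energy : {energy_gwh:.1f} GWh at TDP")
print(f"VR-200 GPU-hours : {gpu_hours_vr200:.2e} (assuming {VR200_SPEEDUP_VS_H100:.0f}x vs H100)")
</syntaxhighlight>

With these inputs the 6·N·D estimate comes out near 3.6×10<sup>25</sup> FLOPs and roughly 27.5 GWh of GPU board energy; the VR-200 line simply scales linearly with whatever speedup factor is assumed.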