Hard to say how much it will affect their bottom line, but AMD's new "Strix Halo" processor is now the best system for anyone wanting to run a local LLM.
You can get one of these with 128GB of RAM for around $1.5k all together, and it will run mid/large models like gpt-oss-120b perfectly.
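For anyone curious what "running it" actually looks like, here's a minimal sketch using the Ollama Python client. The model tag and prompt are assumptions on my part; Ollama needs to be installed and the model pulled first, and other runners like llama.cpp or LM Studio expose similar interfaces:

```python
# Minimal sketch of chatting with a locally hosted model via the Ollama
# Python client (pip install ollama). Assumes the Ollama server is running
# and the gpt-oss:120b tag has already been pulled.
import ollama

response = ollama.chat(
    model="gpt-oss:120b",
    messages=[{"role": "user", "content": "Explain mixture-of-experts in two sentences."}],
)
print(response["message"]["content"])
```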
For reference, a single Nvidia RTX 5090 (without the rest of the PC) is $2-3k and cannot run that full model: its 32GB of VRAM is well short of what the weights alone need.
A Mac mini with 64GB of RAM is $2k and also can't run that full model.
A Mac Studio with 128GB of RAM can run it, but costs $3.7k.
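The memory math behind these claims, as a rough back-of-the-envelope sketch. The parameter count, effective bits-per-weight, and headroom figure are my approximations, not official numbers; real usage varies with runtime, quantization, and context length:

```python
# Back-of-the-envelope memory math for the hardware comparison above.
# Assumptions: gpt-oss-120b has ~117B total parameters and ships in
# ~4-bit (MXFP4) precision; ~4.25 effective bits/weight accounts for
# quantization scales, and 8GB of headroom stands in for KV cache,
# activations, and OS overhead.

def weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate size of the model weights alone, in GB."""
    return params_billion * bits_per_weight / 8  # 1e9 params and 1e9 bytes/GB cancel

model = weights_gb(117, 4.25)   # ~62 GB for the weights
headroom = 8                    # rough allowance for KV cache + OS

for machine, mem in [("RTX 5090, 32GB VRAM", 32),
                     ("Mac mini, 64GB unified", 64),
                     ("Strix Halo box, 128GB unified", 128),
                     ("Mac Studio, 128GB unified", 128)]:
    verdict = "fits" if model + headroom <= mem else "does not fit"
    print(f"{machine}: ~{model:.0f} GB weights -> {verdict}")
```

By this estimate the 5090 can't even hold the weights in VRAM, while the 64GB Mac mini is squeezed out by cache and OS headroom rather than the weights themselves; only the 128GB unified-memory machines clear the bar.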