OpenAI Open-Sources PaperBench, Redefining Top-Tier AI Agent Evaluation
Jin10 reported on April 3rd that at 1 AM that day, OpenAI released a new AI agent evaluation benchmark, PaperBench. The benchmark assesses an agent's ability to search, integrate, and execute: agents must reproduce top papers from the 2024 International Conference on Machine Learning (ICML), which requires understanding the paper's content, writing code, and running the experiments. According to the test data OpenAI released, agents built on well-known large models still cannot outperform top machine-learning PhDs, though they are very helpful for learning and understanding research content.
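PaperBench grades a reproduction attempt against a hierarchical rubric: fine-grained leaf criteria are scored individually, and scores roll up the tree by weighted averaging. The sketch below illustrates that aggregation idea only; the node names, weights, and scores are hypothetical examples, not PaperBench's actual rubrics or API.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class RubricNode:
    """One node of a hypothetical replication rubric tree.

    Leaves carry a judged score in [0, 1]; internal nodes
    aggregate their children by weighted average.
    """
    name: str
    weight: float = 1.0
    score: Optional[float] = None           # set only on leaf nodes
    children: List["RubricNode"] = field(default_factory=list)

    def aggregate(self) -> float:
        """Return this subtree's score in [0, 1]."""
        if not self.children:
            return self.score if self.score is not None else 0.0
        total_weight = sum(c.weight for c in self.children)
        return sum(c.weight * c.aggregate() for c in self.children) / total_weight


# Hypothetical grading of one paper-reproduction attempt.
rubric = RubricNode("replicate-paper", children=[
    RubricNode("code-development", weight=2.0, children=[
        RubricNode("training-loop-implemented", score=1.0),
        RubricNode("eval-script-implemented", score=0.5),
    ]),
    RubricNode("execution", weight=1.0, children=[
        RubricNode("experiments-ran-to-completion", score=0.0),
    ]),
])

print(rubric.aggregate())  # → 0.5
```

Here code development averages to 0.75 and carries twice the weight of execution (0.0), so the overall score is (2 × 0.75 + 1 × 0.0) / 3 = 0.5. Weighting lets a rubric reward partial progress (working code) even when the final experiments fail.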