Microsoft Open-Sources New Version of Phi-4: Inference Efficiency Up 10x, Runs on Laptops
Jin10 reported on July 10 that, this morning, Microsoft open-sourced the latest member of the Phi-4 family, Phi-4-mini-flash-reasoning, on its official website. The mini-flash version continues the family's hallmark of small parameter counts with strong performance. It is designed for scenarios constrained by compute, memory, and latency, can run on a single GPU, and suits edge devices such as laptops and tablets. Compared with the previous version, mini-flash adopts SambaY, Microsoft's self-developed architecture, improving inference efficiency by 10x and cutting average latency by 2-3x, a significant gain in overall inference performance.
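For readers who want to try the model locally, below is a minimal sketch using Hugging Face transformers. The repo id microsoft/Phi-4-mini-flash-reasoning, the example prompt, and the generation settings are illustrative assumptions, not details confirmed by the report; verify the exact identifier on Microsoft's model page before running.

```python
# Minimal sketch: loading and querying Phi-4-mini-flash-reasoning with
# Hugging Face transformers. The repo id below is an assumption -- verify
# the exact identifier on Microsoft's model page before running.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-mini-flash-reasoning"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick fp16/bf16 automatically where supported
    device_map="auto",    # place on a single GPU if available (needs `accelerate`)
)

# The model targets reasoning workloads, so a chat-style math prompt is typical.
messages = [
    {"role": "user", "content": "A train travels 120 km in 1.5 hours. What is its average speed?"}
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

The device_map="auto" setting lets accelerate place the weights on a single GPU when one is present, consistent with the single-GPU claim in the announcement; on a laptop without a discrete GPU the model would fall back to CPU, with correspondingly slower generation.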