## Sources

1. [Anthropic's 81K Study: AI Hopes, Fears, and the Gap](https://awesomeagents.ai/news/anthropic-81k-study-ai-hopes-fears-2026/)
2. [Cursor's Composer 2 Is Kimi K2.5 With RL - And No Attribution](https://awesomeagents.ai/news/cursor-composer-2-kimi-k25-license-violation/)
3. [MiniMax M2.7 Claims to Automate Its Own Training](https://awesomeagents.ai/news/minimax-m2-7-self-evolving-model/)

---

### **Anthropic's 81K Study: AI Hopes, Fears, and the Gap** by Elena Marchetti

*   Anthropic conducted a massive qualitative study involving 80,508 Claude users across 159 countries and 70 languages to understand global AI sentiment [1, 2].
*   The top aspiration for users is "professional excellence" (18.8%), with respondents primarily wanting AI to take over repetitive tasks so they can leave work on time and reclaim personal time [2, 3].
*   The primary fear among users is AI unreliability and hallucinations (26.7%), which surprisingly outranks concerns over job displacement (22.3%) and the loss of human autonomy (21.9%) [2-4].
*   A major analytical takeaway is the "light and shade" pattern: the individuals who benefit most from AI are often the ones who fear its risks most; for example, users who value AI for emotional support are three times more likely to fear becoming dependent on it [5, 6].
*   The study uncovered a deep regional divide regarding AI sentiment: users in the Global South (like Sub-Saharan Africa and South Asia) view AI optimistically as an economic equalizer, while users in the Global North and East Asia are far more concerned with governance, privacy, and cognitive atrophy [7-9].
*   While 67% of participants expressed net positive sentiment, the methodology carries significant sampling caveats: the study interviewed only existing, active Claude users, which inherently excludes anyone who abandoned the tool because they found it unreliable or harmful [2, 10, 11].
*   Because the interviews were conducted in December 2024 but the findings were not published until March 2026, the study reflects experiences with older AI models and may not accurately represent how users experience the far more capable Claude 4.6 models available today [11].

### **Cursor's Composer 2 Is Kimi K2.5 With RL - And No Attribution** by Daniel Okafor

*   Cursor released its highly capable proprietary coding model, Composer 2, but failed to disclose that it was built on top of an open-weight base model [12, 13].
*   A developer discovered a leaked model ID (`kimi-k2p5-rl-0317-s515-fast`) hidden in Cursor's API, revealing that Composer 2 is actually a fine-tuned version of Moonshot AI's Kimi K2.5 model [12, 14, 15].
*   Moonshot AI accused Cursor of violating the Kimi K2.5 Modified MIT License, which strictly requires prominent UI attribution for any commercial product using the model that exceeds 100 million monthly active users or $20 million in monthly revenue [12, 16].
*   With an estimated $167 million in monthly revenue and a $29.3 billion valuation, Cursor's revenue was roughly eight times the license's $20 million monthly threshold, yet its UI mentioned only "Composer 2," with no credit to Kimi [16, 17].
*   Following the leak, Cursor admitted to using the open-weight base, defending the model by claiming that 75% of the computational effort came from its own reinforcement learning training, with only 25% attributable to the base model [17, 18].
*   The dispute was resolved when Cursor committed to upfront attribution for future models and Moonshot accepted Cursor's compliance through its inference partner, Fireworks AI [18].
*   The incident demonstrates that open-weight licenses are enforceable against major corporate players, but it also highlights that AI transparency still relies heavily on whistleblowers and community pressure to uncover hidden base models [19, 20].
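The license-threshold arithmetic in the summary above is simple enough to sanity-check. A minimal sketch, using the article's $167 million revenue estimate and the $20 million/month attribution trigger (variable names are my own, and both figures are estimates rather than audited numbers):

```python
# Sanity check of the Kimi K2.5 Modified MIT attribution threshold
# described above. Figures are the article's estimates.
ATTRIBUTION_REVENUE_THRESHOLD = 20_000_000   # $20M monthly revenue triggers attribution
ESTIMATED_CURSOR_REVENUE = 167_000_000       # ~$167M estimated monthly revenue

# The attribution clause applies once revenue exceeds the threshold.
attribution_required = ESTIMATED_CURSOR_REVENUE > ATTRIBUTION_REVENUE_THRESHOLD
multiple = ESTIMATED_CURSOR_REVENUE / ATTRIBUTION_REVENUE_THRESHOLD

print(attribution_required)                    # True
print(f"{multiple:.2f}x over the threshold")   # 8.35x over the threshold
```

The same clause also triggers at 100 million monthly active users; the revenue prong alone is enough here, which is why the article's "roughly eight times" figure settles the question.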

### **MiniMax M2.7 Claims to Automate Its Own Training** by Elena Marchetti

*   MiniMax released M2.7, a massive 2.3-trillion-parameter Mixture of Experts (MoE) model with a 200K-token context window, which claimed the number one spot out of 136 models on the Artificial Analysis Intelligence Index [21, 22].
*   The model demonstrated elite capabilities for autonomous software engineering, scoring 78% on SWE-bench Verified (matching GPT-5.3-Codex), and features native structured multi-agent collaboration called "Agent Teams" [22, 23].
*   MiniMax heavily marketed the model as "self-evolving," claiming that M2.7 autonomously runs 30% to 50% of its own reinforcement learning research workflow across over 100 optimization iterations [21, 24].
*   However, this claim is a bounded engineering achievement rather than science-fiction-style recursive self-improvement; it simply means the model acts as an agent within a controlled reinforcement learning pipeline to read logs, debug, and adjust hyperparameters [24-26].
*   Despite strong benchmark scores, M2.7 suffers from significant performance drawbacks: it is slow (benchmarked at 49.7 tokens per second against a claimed 100 TPS) and extremely verbose, generating roughly four times the output volume of comparable models, which drastically increases API costs [27, 28].
*   The model struggles operationally during complex agentic workflows, showing a tendency to terminate tasks early as it approaches its context window limits [29].
*   There remains an unresolved controversy surrounding M2.7's origins, as MiniMax was previously implicated by Anthropic in a distillation attack involving 13 million fraudulent exchanges, raising unconfirmed suspicions about how much of M2.7's capability was independently engineered versus extracted from Claude [29, 30].
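To make the speed and verbosity complaint above concrete, here is a back-of-envelope sketch. The 49.7 TPS figure and the ~4x output ratio come from the benchmarks cited above; the baseline token count and per-token price are hypothetical placeholders:

```python
# Back-of-envelope cost/latency impact of M2.7's verbosity.
MEASURED_TPS = 49.7      # benchmarked output speed (article); 100 TPS was claimed
VERBOSITY_RATIO = 4.0    # ~4x the output volume of comparable models (article)

baseline_tokens = 1_000             # hypothetical output size for a peer model
price_per_million_out = 10.0        # hypothetical $ per 1M output tokens

m27_tokens = baseline_tokens * VERBOSITY_RATIO
extra_cost = (m27_tokens - baseline_tokens) * price_per_million_out / 1_000_000
latency_s = m27_tokens / MEASURED_TPS   # wall-clock time just to emit the tokens

print(f"tokens: {m27_tokens:.0f}, extra cost: ${extra_cost:.3f}, latency: {latency_s:.1f}s")
```

At roughly half the claimed decoding speed and four times the output, a task that a peer model finishes in ten seconds of generation takes M2.7 about eight times as long, with the API bill scaling linearly in the extra tokens.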