## Sources

1. [Eating My Own Dog Food: How I Used the Framework to Write the Post About the Framework](https://www.oreilly.com/radar/eating-my-own-dog-food-how-i-used-the-framework-to-write-the-post-about-the-framework/)
2. [The Organization Is the Bottleneck](https://www.oreilly.com/radar/the-organization-is-the-bottleneck/)

---

### **Eating My Own Dog Food: How I Used the Framework to Write the Post About the Framework** by Marc Millstone

In this article, Marc Millstone applies his own engineering framework—which matches **AI autonomy to business risk and competitive differentiation**—to the process of writing the article itself [1]. He argues that for AI to be effective, its autonomy must be balanced by **sufficient human understanding** [2].

**Main Arguments and Framework Application**
*   **The Four-Quadrant Model:** Millstone categorizes tasks into four quadrants based on risk and differentiation to determine how much autonomy to give the AI [1, 3].
    *   **Full Automation:** This was used for mechanical tasks in the "bottom-left" quadrant, such as **formatting eighteen footnotes** and ensuring URL consistency [3]. The risk was low because errors could be easily fixed in editing [3].
    *   **Collaborative Co-Creation:** Used for structural elements like the "build-versus-buy" framing [4, 5]. While Claude (the AI) proposed analogies, the author made the product decisions and "drove the design choice" to ensure the argument held together [4, 5].
    *   **Supervised Automation:** Applied to the counterargument section [6]. The AI drafted potential objections, but the author performed **rigorous verification** to ensure the "steelman" versions of those arguments were fair and not just convenient strawmen [2].
    *   **Human-Led Craftsmanship:** This quadrant includes high-differentiation, high-risk work that the author owned entirely [7]. This included personal anecdotes, **defining the core dimensions of the framework** (risk and differentiation), and selecting the initial evidence base of trusted studies [7, 8, 9].
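The quadrant model above can be sketched as a simple lookup. Note that only the low-risk/low-differentiation and high-risk/high-differentiation corners are explicit in the summary; the axis assignments for the two middle quadrants are my assumption, as are all names in this sketch:

```python
# Illustrative sketch of the four-quadrant model summarized above.
# The two middle-quadrant axis assignments are assumptions, not
# stated explicitly by Millstone.

def autonomy_mode(risk: str, differentiation: str) -> str:
    """Map a task's risk and differentiation level to an AI autonomy mode."""
    quadrants = {
        ("low", "low"): "full automation",             # e.g. formatting footnotes
        ("high", "low"): "supervised automation",      # AI drafts, human verifies
        ("low", "high"): "collaborative co-creation",  # human drives the design
        ("high", "high"): "human-led craftsmanship",   # e.g. core framework, anecdotes
    }
    return quadrants[(risk, differentiation)]
```

The point of the lookup is that autonomy is decided per task, before the AI is engaged, rather than negotiated ad hoc during the work.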

**Key Takeaways and Important Details**
*   **AI as an Adversarial Critic:** Millstone found the most value in using AI to **stress-test his logic** [10]. He suggests using specific, "brutal" prompts—such as telling the AI to act as a "pro-AI, token-maxing CTO"—to get direct, unhedged feedback that surfaces logical gaps [10, 11].
*   **The Danger of "Generic" Voice:** AI models default to a recognizable, polished register characterized by "rule-of-three" lists and words like "delve" or "leverage" [12]. Millstone emphasizes that a writer must manually rewrite AI output to maintain a **genuine, practical brand voice** and prevent the reader from losing trust [12, 13].
*   **The Necessity of Source Verification:** AI-generated citations are often "quietly broken" [14]. Millstone notes that AI often misattributes claims or allows figures (like the Knight Capital loss) to "drift" across drafts, requiring the human author to **reverify every structural source against primary documents** [15, 16].
*   **The Goal of AI Use:** The objective is **interrogative use rather than delegation** [17]. The author must retain a mental model of the work to be able to explain or defend it, much like an engineer must be able to explain code during an incident review [14, 18].

---

### **The Organization Is the Bottleneck** by Sarah Wells

Sarah Wells argues that while engineers are writing code faster than ever due to AI, organizations are not necessarily delivering value faster because **organizational maturity is the primary bottleneck** [19].

**Main Arguments**
*   **AI as an Amplifier:** AI does not fix underlying problems; rather, it **magnifies the existing strengths** of high-performing organizations and the **dysfunctions** of struggling ones [20].
*   **Foundational Parallels with Microservices:** The practices required to make microservices successful—such as **automated testing, guardrails, and active ownership**—are exactly the same foundations needed to make AI coding agents effective [19, 21].
*   **Culture Over Technology:** Success in software delivery is less about the specific technology choices and more about the **cultural and organizational setup** that allows teams the autonomy to move fast with confidence [20].

**Key Takeaways and Important Details**
*   **Guardrails for Autonomy:** Just as microservices require "paved roads" to prevent autonomy from turning into chaos, AI agents need constraints [21]. Artifacts like **coding standards, architectural decision records, and service templates** serve as the necessary context to keep autonomous agents on track [21].
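As a loose illustration of how such artifacts might become agent context, one could imagine gathering a repository's guardrail documents into the prompt. The function name and file layout here are hypothetical, not from the article:

```python
# Hypothetical sketch: collecting "paved road" artifacts (coding standards,
# architectural decision records, service templates) as context for a coding
# agent. The directory layout and function name are illustrative assumptions.
from pathlib import Path

def build_agent_context(repo_root: str) -> str:
    """Concatenate every Markdown guardrail doc found under the repo root."""
    root = Path(repo_root)
    parts = [
        f"# {path.relative_to(root)}\n{path.read_text()}"
        for path in sorted(root.rglob("*.md"))
    ]
    return "\n\n".join(parts)
```

The design choice this illustrates is that guardrails are ordinary, versioned repository artifacts: the same documents that keep human teams on the paved road can be fed verbatim to an agent.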
*   **The Deployment Pipeline as a Safety Net:** Robust CI/CD pipelines, including **automated tests and progressive rollouts**, are essential for catching mistakes made by both humans and AI before they reach production [22].
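A progressive rollout reduces to a simple gate: widen exposure while observed health holds, revert otherwise. This sketch is my own illustration of the idea, not code from the article; the names and the doubling schedule are assumptions:

```python
# Illustrative sketch of a progressive-rollout gate: increase the share of
# traffic seeing a change while its error rate stays under a threshold,
# and roll back (return 0.0) the moment it does not.

def next_rollout_pct(current_pct: float, error_rate: float,
                     threshold: float = 0.01) -> float:
    """Return the next traffic share for a change, or 0.0 to roll back."""
    if error_rate > threshold:
        return 0.0                        # safety net: revert the change
    if current_pct == 0.0:
        return 1.0                        # start with a small canary slice
    return min(100.0, current_pct * 2)    # double exposure each healthy step
```

The same gate applies regardless of whether a human or an AI agent authored the change, which is Wells's point: the pipeline, not the author, is the safety net.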
*   **Importance of Observability:** Code generated by AI must be treated with the same (or higher) level of scrutiny as human code [22]. This requires **logs, metrics, and traces** to understand what changed and why, alongside independent deployability to allow for quick reversals when an agent makes a mistake [22].
*   **Engineering Enablement:** Platform teams play a crucial role by providing the libraries and "golden paths" that AI agents use as constraints [23]. Organizations that haven't invested in **enablement** will find that AI only "amplifies the mess" [23].