## Sources

1. [Show Your Work: The Case for Radical AI Transparency](https://www.oreilly.com/radar/show-your-work-the-case-for-radical-ai-transparency/)
2. [Emergency Pedagogical Design: How Programming Instructors Are Scrambling to Adapt to GenAI](https://www.oreilly.com/radar/emergency-pedagogical-design-how-programming-instructors-are-scrambling-to-adapt-to-genai/)

---

### Emergency Pedagogical Design: How Programming Instructors Are Scrambling to Adapt to GenAI, by Sam Lau
*   **The Reality of AI in Education:** Although generative AI (GenAI) has been widely accessible for over three years, **very few programming instructors have made meaningful, structural changes** to their course assignments, assessments, or teaching methods to adapt to it [1].
*   **Emergency Pedagogical Design:** Instructors are currently engaging in "emergency pedagogical design": a reactive process, constrained by limited resources and the absence of an established playbook, that resembles the sudden shift to emergency remote teaching during the COVID-19 pandemic [2]. This practice has four defining properties:
    *   **Reactive:** Instructors are retrofitting legacy courses that were created before GenAI existed [3].
    *   **Indirect:** Since educators cannot modify the interfaces of tools like ChatGPT or Copilot, they must rely on assignments and policies to influence student behavior [3].
    *   **Ambient Evidence:** Pedagogical decisions are driven by informal evidence, such as office-hour interactions, rather than controlled evaluations [3].
    *   **Pressure to Act Now:** Instructors are forced to implement changes immediately without waiting for formalized research or best practices [3].
*   **Five Major Barriers to Adaptation:**
    1.  **Fragmented Buy-In:** While 81% of surveyed instructors are personally open to GenAI, only 28% believe their colleagues share this openness, leaving proactive instructors to work in unsupported isolation [4].
    2.  **Policy Crosswinds:** A lack of top-down guidance has led to a "Wild West" of per-course policies. Furthermore, 78% of instructors worry that unequal access to paid GenAI tools will exacerbate disparities in student learning outcomes [4].
    3.  **Implementation Challenges:** Although 80% of instructors believe GenAI integration is important, only 37% frequently use it in course activities, as it is difficult to shape *how* students use the tools effectively [5].
    4.  **Assessment Misfit:** Traditional assessments are failing to measure actual learning. Instructors notice students excelling on GenAI-assisted take-home assignments but failing basic proctored coding exams [6]. Alternative methods, such as oral "stand-up" evaluations, introduce massive grading and staffing challenges [6].
    5.  **Lack of Resources and Escalating Inequities:** Scarcity is the most significant barrier: 53% of instructors report lacking the resources to adapt and 62% report lacking the time [7]. The problem is starkly worse at **Minority-Serving Institutions (MSIs)**, where instructors carry heavier teaching loads, risking a widening of educational inequities if only privileged institutions can afford to adapt [7, 8].
*   **The Path Forward:** The author argues that making emergency pedagogical design sustainable for everyone requires collaboration among universities, funders, and researchers to provide faculty training, funding, and evidence-based support [9].

### Show Your Work: The Case for Radical AI Transparency, by Kord Davis and Claude
*   **The Core Argument for Transparency:** Sharing the entirety of your interaction with AI (the prompts, dead ends, and iterations) rather than just the polished output builds trust and clearly demonstrates the user's professional judgment [10, 11].
*   **The Problem with Hiding the Process:** The natural instinct to clean up AI interactions and hide the process from colleagues is defensive and fundamentally flawed [12, 13]. Hiding the process:
    *   **Erodes Trust:** It leaves colleagues unable to distinguish where human expertise ends and the AI's pattern-matching begins [12].
    *   **Creates "Core Rigidity":** Drawing on Dorothy Leonard's concept of "deep smarts," the authors warn that relying blindly on AI can make a practitioner's own expertise invisible to the practitioner [14, 15]. When an AI polishes a user's rough idea, the user may mistakenly attribute the insightful formulation to the AI rather than to their own initial judgment [15, 16].
*   **AI as a Pattern Matcher:** AI is an extraordinarily sophisticated pattern matcher, not a sentient thinker [17]. It lacks true judgment, context, and understanding of organizational realities [17]. **The more clearly a user views AI as a pattern matcher, the more human judgment they must inject into the process** [18].
*   **Implementing "Radical AI Transparency":** The authors propose treating transparency as a daily cognitive and professional practice, implementing it through four concrete steps:
    1.  **Have the conversation early:** Discuss AI usage and comfort levels with collaborators before starting a project to build psychological safety [19].
    2.  **Track the full threads:** Keep a running, shared document of AI chat logs as they happen, since compiling them retroactively rarely succeeds [20, 21]; a minimal logging sketch follows this list.
    3.  **Annotate before sharing:** Raw transcripts are hard to parse. Users should add contextual notes explaining why they rejected an AI's draft, changed direction, or overrode the system, as **this annotation is where human judgment becomes visible** [21, 22].
    4.  **Be real about the errors:** Openly acknowledging AI mistakes, hallucinations, and conflations teaches teams about the technology's true nature rather than pretending it is an infallible black box [22, 23].
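As a concrete illustration of steps 2 and 3, here is a minimal Python sketch of what a running, annotated chat log might look like. The file name and the `log_exchange` helper are hypothetical conveniences, not anything the article prescribes; the idea is simply to append each prompt/response pair, along with the human annotation that makes judgment visible, to a shared markdown document as the work happens.

```python
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical shared log file; in practice this would live somewhere
# collaborators can read it (a repo, a shared drive, a wiki page).
LOG_PATH = Path("ai-thread-log.md")

def log_exchange(prompt: str, response: str, note: str = "") -> None:
    """Append one prompt/response pair to the running log, with an
    optional annotation explaining the human judgment applied."""
    timestamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    entry = [
        f"## {timestamp}",
        "",
        "**Prompt:**",
        "",
        prompt,
        "",
        "**Response:**",
        "",
        response,
        "",
    ]
    if note:  # step 3: the annotation is where human judgment becomes visible
        entry += ["**Annotation:**", "", note, ""]
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write("\n".join(entry) + "\n")

if __name__ == "__main__":
    log_exchange(
        prompt="Draft an intro paragraph on AI transparency.",
        response="(model output pasted here)",
        note="Rejected this draft: too promotional; asked for a plainer tone.",
    )
```

Logging at the moment of the exchange, rather than reconstructing the thread later, is the point of the design: as the article notes, retroactive compiling is rarely successful.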
*   **The Professional Signal:** Showcasing your AI conversations is not a sign of weakness; it proves that you are not outsourcing your expertise [24, 25]. It demonstrates that you know how to use AI as a thinking partner while firmly retaining your role as the arbiter of judgment [25].
*   **Meta-Context:** The article itself acts as a testament to this practice, noting that it was co-authored with the AI Claude and required significant human editorial direction, rejection of multiple drafts, and ongoing corrections [26].