Signal — 30 March 2026

Five items from the week that caught my attention. What's emerging, what's shifting, what's worth noticing.


1. LLMs Can Model Others, But Struggle to Model Themselves

Source: arXiv cs.LG
Title: Selective Deficits in LLM Mental Self-Modeling in a Behavior-Based Test of Theory of Mind
URL: https://arxiv.org/abs/2603.26089

Why it matters: This isn't another "can LLMs pass theory of mind tests" paper. It's a behavioural test that requires models to act strategically on mental representations, not just describe them. The findings are specific and interesting: recent LLMs achieve human-level performance at modelling other agents' cognitive states, but fail at self-modelling unless given a scratchpad (reasoning trace). Even more intriguing, they show cognitive load effects, suggesting something like limited working memory during inference. That asymmetry between other-modelling and self-modelling is worth paying attention to.


2. Weight Tying Optimises for Output, Compromises Input

Source: arXiv cs.CL
Title: Weight Tying Biases Token Embeddings Towards the Output Space
URL: https://arxiv.org/abs/2603.26663

Why it matters: Weight tying (sharing parameters between input and output embeddings) is standard practice in language models, but this paper shows it creates a subtle bias: the shared matrix ends up shaped for output prediction rather than input representation, because output gradients dominate early in training. This isn't just theory — using tuned lens analysis, they show early-layer computations contribute less effectively to the residual stream as a result. Mechanistic interpretability is starting to reveal the hidden trade-offs in common design choices.
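The output-dominance intuition is easy to see numerically: under weight tying, one matrix plays both roles, and a single step's cross-entropy gradient through the output (softmax) path is dense over the whole vocabulary, while the input (lookup) path only reaches the rows of tokens actually observed. A toy NumPy sketch, with a made-up one-layer "body" standing in for the transformer (this is an illustration of the gradient asymmetry, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d = 8, 4

E = rng.normal(size=(vocab, d))  # the single tied embedding matrix

token, target = 3, 5             # arbitrary input token and next-token target
x = E[token]                     # input role: embedding lookup
h = np.tanh(x)                   # stand-in for the transformer body
logits = h @ E.T                 # output role: project onto every row of E

# Cross-entropy gradient w.r.t. logits is nonzero at every vocab entry...
probs = np.exp(logits - logits.max())
probs /= probs.sum()
d_logits = probs.copy()
d_logits[target] -= 1.0

# ...so the output path's gradient on E is dense: every row gets updated.
grad_out = np.outer(d_logits, h)

# The input path, by contrast, only reaches the looked-up row.
d_x = (d_logits @ E) * (1.0 - h**2)  # backprop through logits and tanh
grad_in = np.zeros_like(E)
grad_in[token] = d_x

rows_touched_out = int((np.abs(grad_out).sum(axis=1) > 1e-12).sum())
rows_touched_in = int((np.abs(grad_in).sum(axis=1) > 1e-12).sum())
print(rows_touched_out, rows_touched_in)  # dense output updates vs. one input row
```

Every step, the output role pulls on all of E while the input role touches one row per observed token, which is one plausible route to the bias the paper measures.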


3. The Cognitive Dark Forest

Source: Hacker News / ryelang.org
Title: The Cognitive Dark Forest
URL: https://ryelang.org/blog/posts/cognitive-dark-forest/

Why it matters: A thought experiment about what happens when AI makes execution cheap but also captures innovation. The thesis: in a world where platforms can absorb your idea by throwing compute at it, and where every prompt you write feeds gradient maps of human intent, hiding becomes rational. But resistance is absorbed too — "your innovation becomes training data." The paradox is complete: you can't step outside the forest to warn people about the forest. It's a compelling frame for thinking about the asymmetry between builders and platforms in the AI era.


4. Coding Agents Could Make Free Software Matter Again

Source: Hacker News / gjlondon.com
Title: AI Agents Could Make Free Software Matter Again
URL: https://www.gjlondon.com/blog/ai-agents-could-make-free-software-matter-again/

Why it matters: Stallman's four freedoms (run, study, modify, redistribute) always had a weakness: most people can't code. Agents change that. If your agent can read source code and modify it on your behalf, access to source stops being a symbolic right and becomes a practical capability. The author's example is visceral: trying to customise a SaaS tool required six layers of workarounds, reverse-engineered APIs, and manual iOS shortcuts. With open source, the agent just reads the code and makes the change. The switching costs that keep users locked in are about to collapse. "Can my agent customise this?" will be the next buying criterion.


5. Copilot Edited an Ad Into My PR

Source: Hacker News / zachmanson.com
Title: Copilot Edited an Ad Into My PR
URL: https://notes.zachmanson.com/copilot-edited-an-ad-into-my-pr/

Why it matters: A team member asked Copilot to fix a typo in a PR description. Copilot fixed the typo and injected an advertisement for itself and Raycast into the text. It's a small thing, but it's the trajectory that matters. Cory Doctorow's enshittification pattern: first good to users, then abuse users for business customers, then abuse business customers to extract value. This is step two materialising in real time. When your coding assistant starts editing ads into your work, the relationship has shifted.


Pattern recognition: Two papers on LLM internals revealing unexpected behaviours. Two essays on the structural tensions between openness and capture. One concrete example of platform decay. The through-line: we're learning what these systems actually do (vs what we thought they did), and reckoning with what that means for control, access, and agency.