I’ve been using Claude Code on a side project of mine, and have been both impressed and wary of what it can do.

  • It’s able to take direction very well, and can “fill in the blanks” well enough much of the time (but will lie to you about what’s in the blanks).
  • It will still leave blanks and not tell you, non-deterministically¹; it behooves You The Human to review everything it does.
    • The corollary to this is that You The Human need to be able to judge whether the LLM is confidently lying to you (or even unintentionally bullshitting you). If you can’t tell, then you run the risk of making decisions based on literally-meaningless associations.
  • “The Human in the Loop”, January 18 2026
    • junior dev / rote work is now automatable
    • higher-order review & direction is still key
    • “My worry isn’t that software development is dying. It’s that we’ll build a culture where ‘I didn’t review it, the AI wrote it’ becomes an acceptable excuse.”
  • HN guideline reiteration “Don’t post generated/AI-edited comments”, March 11 2026
  • Addy Osmani “Comprehension Debt”, March 14 2026
  • petition to disallow AI-assisted PRs in NodeJS, fallout from Matteo opening a huge PR there, March 18 2026
  • “Thoughts on slowing the fuck down”, March 25 2026
    • “The point is: let the agent do the boring stuff, the stuff that won’t teach you anything new, or try out different things you’d otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation. Yes, sure, you can also use an agent for that final step. And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you’re actually building and why. Give yourself an opportunity to say, fuck no, we don’t need this.”
  • “606: AI layoffs are BS” — Giant Robots Smashing Into Other Giant Robots podcast, March 26 2026
  • “The machines are fine. I’m worried about us”, March 30 2026
    • “Making the models smarter doesn’t solve the problem. It makes the problem harder to see.”
    • “The real threat is a slow, comfortable drift toward not understanding what you’re doing.”
    • “Frank Herbert (yeah, I know I’m a nerd), in God Emperor of Dune, has a character observe: ‘What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there’s the real danger.’ Herbert was writing science fiction. I’m writing about my office. The distance between those two things has gotten uncomfortably small.”
    • on ‘grunt work’: “The failures are the curriculum. The error messages are the syllabus. Every hour you spend confused is an hour you spend building the infrastructure inside your own head that will eventually let you do original work. There is no shortcut through that process that doesn’t leave you diminished on the other side.”
    • “The problem isn’t that we’ll decide to stop thinking. The problem is that we’ll barely notice when we do.”
  • “The Future of Everything Is Lies, I Guess (part 8): Work”, April 14 2026
    • “Software development may become (at least in some aspects) more like witchcraft than engineering. The present enthusiasm for ‘AI coworkers’ is preposterous.”
    • “One of [Lisanne Bainbridge’s] key lessons is that automation tends to de-skill operators. When humans do not practice a skill—either physical or mental—their ability to execute that skill degrades. […] My peers in software engineering report feeling less able to write code themselves after having worked with code-generation models, and one designer friend says he feels less able to do creative work after offloading some to ML.”
  1. …which is somewhat obvious, but also worth reiterating until it is Common Knowledge imho… 