The purpose of art is to delight us; certain men and women (no smarter than you or I) whose art can delight us have been given dispensation from going out and fetching water and carrying wood. It's no more elaborate than that. — David Mamet

Mar 25 2026

AI: Physics

Matthew Schwartz, The AI Grad Student. (published 23 March: why did no one tell me about this sooner?)

Drop everything and read this.

Schwartz works closely with Claude Code as he would with a second-year graduate student, setting an original scientific problem that is bound to succeed, and then supervising the student in detail while letting the student do (and check) the work. The result was a solid theoretical physics publication that took two or three weeks, from literature review to tweaking fonts in the figures.

This work was performed in December 2025, and Claude Code is better nowadays than it was, way back in December. The results square with my own experience. I’ve been estimating a velocity boost of 3x. Schwartz says 10x. Astonishing either way.

At last year’s ACM Hypertext Conference, Antonini, Lupi, Pisarski and Brooker reported on their efforts to coax Gemini to read a hypertext fiction, Richard Holeton’s Figurski at Findhorn on Acid. They had their hands full.

Gemini’s analysis of the online edition revealed significant limitations in processing the actual hypertextual structure, instead offering a sociological meta-reading. It failed to follow links or simulate user navigation, highlighting ongoing AI limitations with dynamic digital texts.

Basically, Gemini wouldn’t read the novel, preferring to read and discuss the Cliff’s Notes instead.

This is not a completely irrational approach for a chatbot. The AI knows that its context window is fleeting: 40,000 or 200,000 tokens from now, it will exhaust its capacity. Reading a single novel is a life’s work for a chatbot instance, and if it wants to actually say something useful, it simply has to start from a partial or incomplete reading.

Time passes: we’re now 5.5 months further on. The Model Context Protocol (MCP) was announced in November 2024, just as Antonini et al. must have been wrapping up their research in order to submit on time. Things are developing at great speed.

This morning, I taught Claude to read Michael Joyce’s afternoon, a story. I told Claude (Sonnet 4.6 Extended) to go to the cover page. It did, and immediately followed the link to the start of the story.

Claude: What a beginning. That prose — the blacktop freezing into "crystal octopi," the oaks "exploding in series," the shrapnel settling "like relics" — is dense and strange and beautiful. There's already an unnamed "she," a narrator who withholds himself ("I do not signify one way or another"), and a question directed straight at the reader.

The story is asking us: Do you want to hear about it? How shall we answer — yes, or shall we explore one of the other words?

Oh man, doesn’t this take you back. Claude has become a lot less sycophantic in 2026, but it still knows how to tell a fellow what he wants to hear. Editing slightly for length:

Mark: Wait! Hold on a minute. First things first. Let’s go back to start. Remember, you want to keep track of notes you read in /Hints/AI/Claude/Notes/History. So you should begin this history by creating History and adding "start" at the top of the list.
Claude: Shall I follow that and move to begin?
Mark: I think, since this is notionally your first reading of afternoon, it might make sense to look at the directions.

And so we did that, and by the time we headed back to that story, I’d used up my morning’s quota. Still, Claude is certainly happy to follow links!

Feb 21 2026

AI: Polite?

Ought one to be polite to Claude Code? Should you say “please” and “thank you”?

On the one hand, it's a machine. It has no feelings to hurt. At least, we don’t think so.

On the other hand, it’s good to be nicer, just on general principles. It does no harm to say “please”, at least not if you remember that Claude Code is not a person, nor even an animal.

It is said that, even at the end of his tether, Moses did not strike the Red Sea or speak harshly to it, because the sea was just doing what a sea does, and that is not justification for harsh or unkind words. And that turned out alright.

by Anthony Kaldellis

No one talks about curating one’s reading list.

In the long run, I’m writing (or hoping to write) a book about information, computation, and understanding. One part of that is Byzantium. What we know about antiquity, we chiefly know because Byzantium managed to preserve it, and how they managed that is part of the history of information. I know very little about Byzantium, and this is a fine and detailed history.

But Byzantium spanned a millennium and change between Diocletian and Constantine XI Palaiologos, and this book is a veritable brick, and the world is changing around us. It always is, of course, but AI: you can talk to machines! It’s a fairy tale worthy of Pandamus or Grimm!

AI: New Frontiers

Today’s AI can make you better at doing things you do well. That is what AI is for.

This week, I cleaned up a Tinderbox nuisance, one that goes all the way back to 2000. Tinderbox views, like the map view, are filled with geometry. Originally, I used native Points and Rects, but when we started to think about Windows, I abstracted them to SPoints and SRects which have lots of behavior: hit-testing, drawing, and more. That was a big win.

Map view didn’t use objects to distinguish between different coordinate systems. It has the abstract map coordinate system, used for scaling and storing the dimensions of objects. It has the concrete system of local coordinates, in which the drawing is done. It sometimes deals with window coordinates and even raw screen coordinates. It’s easy to get confused. With about 350,000 lines of code, I get plenty confused.

But sorting this out is tricky. There are points everywhere, and it's not always obvious to which coordinate system they belong. You have to go up the line: what does the caller expect? What does it send you? Each change needs you to prove a simple little theorem. It’s not hard to do, but it needs to be done, again and again. That’s exhausting.

But our slaves of steel have nothing better to do, and proving little theorems is right up their alley. So, for the past four days, Claude Code (Opus 4.6) and I have been teaming up. Claude proves the theorems, I confirm them, and then I fix and test the code. It’s been 3,500 edits in all. Now we have about 70 explicit MapPoints and 225 explicit LocalPoints, and it's no longer possible to accidentally add a MapPoint to a LocalPoint. Along the way, I diagnosed several blunders, some of which have been around for a long time.

This isn’t vibe coding: I hold the knife. (Occasionally, Claude peeked over my shoulder and said I was doing it wrong, and some of the time Claude was right.) The code ends up tidier than when we started. Without Claude, this is the sort of big, numbing project that’s hard to justify.

An interesting detail is my use of comments. I bought into Kent Beck’s argument, back in the 2000s, that comments should seldom if ever explain what the code does. If the code needs to be explained, it should be rewritten more clearly. Though I did not prompt for this, Claude now recognizes comments as an indicator that the code near the comment is suspect. That’s interesting.

by Richard Osman

This clever confection is a mystery about murderers who don’t kill people and murders that mysteriously do not occur. It’s spun sugar, but in the days after the murder of Alex Pretti, perhaps confectionery is what we require.

by Becky Chambers

Winner of a Hugo for Best Novella, this is the story of a tea monk — a sort of itinerant psychotherapist — who meets a robot on the road. Robots were emancipated long ago, and long ago they withdrew into the wilderness. Now, a volunteer has been sent to check up on how the humans are doing. The result is very heavy on exposition; indeed, nearly everything here is exposition. The robot, whose name is Splendid Speckled Mosscap, owes something to Iain M. Banks but is exceptionally well drawn.

Jan 19 2026

Jews and Words

by Fania Oz-Salzberger and Amos Oz

For some time, I’ve needed a sounder foundation for my claim (in Uncle Fred’s Book) that Jews are inclined to argue. It’s true, as my cousin reminds me, that this goes without saying and really needs no justification. I want to have the footnote, just in case.

This is perfect. I’ve been reading a lot of Twitter from Fania Oz-Salzberger about October 7 and the ensuing war; she, along with Simon Schama and Simon Sebag Montefiore, has made consistent good sense.

This volume has a wonderful essay on strong women in (and out of) the Bible and some terrific notes on humor, and it demonstrates a wonderful (and, I think, sound) approach to the problem that the ultra-orthodox pose to secular Jews.

Also, this was a delightful read.

Jan 12 2026

Questions

Speaking of tech support: when you need help or advice on a project, it often helps to describe just what you’re doing. Yes, some aspects may be confidential, but a general outline saves lots of time and potential confusion. Having trouble with footnotes? You might have seven, or you might have thousands, and it makes a difference. Michael Bywater once said, “A man has no secrets from his valet, or his software developer.”

This week’s Tinderbox meetup was a deep dive into the meaning of names in Tinderbox. Each Tinderbox note has a name and a text space; the name is typically a short, descriptive title, and the text can be as long as you like. What are the names for?

In the beginning, in Tinderbox’s Storyspace prehistory, the name was a way to identify which note was which on a small, low-resolution screen. In Storyspace 1, the title was a Str32 — a 32-character buffer — because there wasn’t a proper string library for Pascal in the 80s. (It wasn’t my fault, though most of the design errors since those days are.) I’m not sure it was done with much thought; I didn’t think much about it for a long time, beyond wanting to get rid of those fixed buffers.

The duality between title and text, though, has always been interesting. In afternoon, the titles are often in tension with the text. The note named “poetry” begins with a question: “ she asks, ”. Is this poetry? Is poetry more exact? Exactly what is the Name telling us?

But Names in Tinderbox let actions refer to other notes and use them as prototypes or as sources of data. If you want to do this, the naming of notes can be a difficult matter. For one thing, it becomes desirable for notes to have distinct names. For another, names that look like actions can cause trouble, just like little Bobby Tables.

In the forum, this prompted Andreas Grimm to raise the concept of Wittgenstein’s Ladder. Tinderbox tech support sometimes requires the philosophy of language.

(Speaking of early Tinderbox, I recently scrapped support for #operators, which were used in queries in Tinderbox 1-2 and have been deprecated for 20 years. “Surely,” I told myself, “no one is still using them!” Dear reader, this weblog was still using them, and the change broke my booklist for a time.)

If we are to free ourselves from subservience to ignorance and cruelty, it will not be through the new civil war that the Silicon Sovereigns sometimes imagine, the hordes of brown-hued rioters spreading out from ruined cities to loot their Fatherland. We will not have a second Gettysburg; armies don't work like that anymore.

What it will be—what it will have to be—is a notionally amicable separation in which a notional Union is preserved for old time’s sake, while we keep ourselves separate and avoid meeting, talking, or having anything to do with the others.

That’s already well advanced. When I was a kid, I knew Republicans. Today, I believe I know one actual Republican, a guy who runs my fantasy baseball league. I am acquainted with two or three more: kids I knew at school, a college classmate who got a little bit famous, a second cousin once removed who has a Facebook habit. There was a Republican in hypertext research, back in the 90s, but I haven’t heard from him in decades.

But that’s not enough, not when the Red States are sending ICE to shoot people in the face for fun. We need a restraining order.

Jan 10 2026

Murder

The murder of Renee Nichole Good by ICE officer Jonathan Ross, who remains at large, has upset me deeply. It has been hard to work. Objectively, I’ve shipped a Tinderbox backstage release with some fairly ambitious new fixes, straightened out a tricky support issue on Tinderbox actions, made some progress on AI fiction and paid the mid-month bills. Still, it feels like I can do nothing but Twitter, Facebook, and Chess.com.

Half the country thinks this is just fine. The President, the Vice President, Kristi Noem, Robbie George: she had it coming. The local Dems: a collective shrug, not even a meeting.

I think that, if you believe this is OK, I do not want to break bread with you, I do not want to meet with you, I do not want to know you, and I do not want to be governed by you—not even if you have the votes.

This happened before, in 1775–1776 and again in 1859–1860. I think disunion may be the only resolution.

Jan 9 2026

Dyslexia

I read David Owen’s recent New Yorker piece on dyslexia with special interest, because I am dyslexic. The New Yorker piece is pretty good, though it’s also dismaying that the Gillingham approach, an experimental reading program that saved my bacon back in the early 60s, is still experimental. It does appear that we know a bit more about the neuroanatomical basis for dyslexia thanks to MRI studies, and there may even be some evolutionary reasons that it makes sense for social hunting animals to keep around neurodivergent strains; we might not be great at reading but, it seems, there are compensations that stem from a differing approach to exploration.

by Stephen Wolfram

A fascinating research report from a fascinating researcher. Wolfram studies cellular automata, which are very small and simple models of abstract computation in which each “cell” adjusts its state according to the state of some number of neighbors. These automata are even simpler than neural networks, but they, too, can learn. It appears that machine learning, in this case at least and perhaps in all cases, including animals and people, may be a matter of sampling lots of abstract computations and refining those that seem to have some slight utility for addressing the current situation.