AI: Baker Beware
In “Baker Beware”, Brenda Goodman excoriates AI-generated recipe pages. These pages look like normal recipe blogs, but they are churned out mechanically, page after page, each carefully tuned to Google’s search-ranking algorithm. Google, having destroyed weblogs through its zeal for ad revenue, is now swamped by nonsensical recipe pages like the “chocolate acorns with nut-covered caps” that Goodman found online. The AI slop is preposterous.
Yet this is not the fault of AI! This is the fault of deceptive grifters, colluding with Google to waste your time and to steal from small businesses that pay good money for worthless ads.
A significant strand of the literature on AI and cognition argues that machines cannot be intelligent in the way we are, because they have no bodies. This should be most clearly evident when an AI is asked to reason about sensual pleasures like good food and great music, pleasures it simply cannot know. (I am skeptical of this approach, because people who are deaf or blind are not unintelligent, but certainly bodies matter to the human condition.) So we might expect AIs to be especially bad when talking about food.
Claude (Sonnet 4.5) is remarkably capable when it comes to food. “My focaccia is pretty good,” I told it. “I use Ruhlman’s 5:3 ratio: 5 parts flour (by weight) to 3 parts water, plus yeast, olive oil, and salt. But my focaccia isn’t nearly as puffy as the bakery’s, and it never has those big bubbles. What am I doing wrong?” Claude came back right away to say that my 60% hydration was too low, and that I should try 70% or even 75%. It went on to say that high-hydration doughs can be hard to handle, and suggested I ease into it at 70%. This is good cooking and good pedagogy.
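The hydration arithmetic is worth making explicit: a 5:3 flour-to-water ratio by weight means 3 ÷ 5 = 60% hydration, and raising hydration means raising the water weight for the same flour. A minimal sketch (the function names are my own, not Ruhlman’s or Claude’s):

```python
def hydration(flour_g: float, water_g: float) -> float:
    """Baker's hydration: water weight as a fraction of flour weight."""
    return water_g / flour_g

def water_for(flour_g: float, target_hydration: float) -> float:
    """Water weight needed to hit a target hydration for a given flour weight."""
    return flour_g * target_hydration

# Ruhlman's 5:3 ratio, scaled to 500 g of flour:
print(hydration(500, 300))    # 0.6, i.e. 60% hydration
print(water_for(500, 0.70))   # 350.0 g of water for 70%
print(water_for(500, 0.75))   # 375.0 g of water for 75%
```

So moving from 60% to 70% on a 500 g batch of flour means adding 50 g more water, nothing else changing.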
Claude was also quite good about menu planning for my last dinner party. I gave it my planned menu. “That’s enough for twice as many guests. Three desserts?!” I explained the circumstances, and it partly relented: a bit of excess could be generous and comforting. Still, the first course was far too rich, it thought, and I should replace the dauphinoise I had planned alongside the roast beef with something leaner. I settled on roasted squash.
Claude knows about trends in food and the history of cooking. When we got talking about a modernist interpretation of Stroganoff, it suggested a stock reduction, shallots, a touch of Dijon, and crème fraîche. Claude is quite skeptical of classical French sauces, in fact, even though it is aware that I am not.
What would happen if Claude were hallucinating, just making stuff up? Nothing! If it were wildly off-base, I know enough to see what cannot possibly work. Even if I did not see it right away, it would soon become evident. And we’re all too concerned with recipes anyway; even if you don’t bother to let the bread rise once, much less twice, it may very well work out. It’s just dinner.
On Twitter, Celeste Ng has been campaigning for AI content warnings. “Would that include,” I wondered, “asking an AI to help you work through the statistical mechanics you never quite mastered in college, if you needed it for a story?” Ng’s response was sensible in context: what harm does the acknowledgment do? Another Twitter account (who turned out to be a fundamentalist nut, and whom I blocked) lit into me: how could I know that the statistical mechanics it was explaining was not a hallucination? The same way I know that the stuff in a textbook isn’t hallucination (and yes, on occasion that, too, has happened to me): you work through problems, you use the theory, and you see whether it blows up or not.
