I’m heading to Cooperstown this weekend to celebrate the 25th anniversary of a fantasy baseball league I’ve played in for a couple of decades.
What will the Hall of Fame look like in, say, ten years? Sure, Cooperstown is famously a museum run by an accountant and curated by a camel. Still, let’s think about some of the players who, if things continue the way they’re going, will not be playing for the Hall of Fame team in 2023.
- Alex Rodriguez ss
- Pete Rose 2b
- Barry Bonds cf
- Mark McGwire 1b
- Sammy Sosa rf
- Jeff Bagwell 3b
- Manny Ramirez lf
- Rafael Palmeiro dh
- Mike Piazza c
- Roger Clemens sp
- David Ortiz ph
- Joe Jackson ph
Run that lineup out on the field, and you’re going to win a lot of games against anyone. (Bagwell’s out of position but plausible; substitute Caminiti if you must. Manny platoons with Shoeless Joe.)
People who might not be in the Hall of Fame will include players who have arguable cases for:
- Best player, ever
- Most valuable shortstop, ever
- Best pitcher, ever
- All-time leader, career HR
- All-time leader, career hits
- All-time leader, season HR (two of them)
- Best-hitting catcher, ever
And this is pretty much off the top of my head; I’m probably forgetting plenty of people.
Most of this is about drugs. But that’s silly, because everyone today uses performance-enhancing drugs and everyone has access to medical treatment that would have changed everything for players in other times. What might Mickey Mantle have done if his doctors had access to MRIs and arthroscopes? How many careers were lost to torn rotator cuffs before we knew what a rotator cuff was? How about syphilis and tuberculosis? You go to clinic A and get some medicine and everything’s terrific. You go next door and get some medicine and you’re banned for life.
In Clean Code, Robert Martin strongly deplores boolean arguments to constructors. It’s one of the few code smells he identifies that’s not (I think) derived from Fowler’s original Refactoring. At first it seemed outlandish, but I think I’m convinced.
Though Martin doesn’t make a point of the exception, we’re not talking about objects that wrap a boolean state. Things like Alarm might naturally take a boolean:
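Something like this minimal sketch (the Alarm interface here is an assumed illustration, not code from Clean Code or Tinderbox):

```cpp
#include <cassert>

// Here the boolean genuinely *is* the object's state, so the
// flag argument is natural rather than a smell.
class Alarm {
public:
    explicit Alarm(bool isOn) : on(isOn) {}
    bool isOn() const { return on; }
private:
    bool on;
};
```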
There’s not that much point in turning a boolean into an object, but you might want this for side effects. A TestAlarm just remembers if it’s on or off, but a LowMemoryAlarm could notify objects to clear their caches, and a FireAlarm could ring the bells, activate sprinklers, and summon the fire department.
We’re not talking about that.
What we’re talking about here are constructor flags that identify arguments or manage behavior. For example, Tinderbox 5 has
UnicodeString::UnicodeString(const string& s,bool isUTF8)
This is a legacy of the movement from MacRoman encoding to UTF-8; we pass true to tell the constructor, “This is already Unicode,” and we pass false to say, “This comes from a legacy source and it’s still using MacRoman text encoding.” The conversion can be expensive and so it’s done just in time; if we get a MacRoman string and we’re only asked for MacRoman, we never need to do the conversion.
There’s a nice, easy refactoring that gets rid of the parameter and clarifies your intent.
- Add a new public constructor without the boolean argument. The new constructor assumes the most common or normal value for the argument, and does exactly what the old constructor did. So now we have
UnicodeString::UnicodeString(const string& s)
- Define a new class for the unusual case that inherits from your class. It needs only a constructor and a destructor. The constructor simply calls the original class constructor.
LegacyUnicodeString::LegacyUnicodeString(const string& s)
- Make the old two-argument constructor protected.
- Update each place where the original class was constructed, using either UnicodeString or LegacyUnicodeString as appropriate.
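Put together, the refactoring might look like the sketch below; the class bodies and the isLegacy accessor are my own illustration, not the actual Tinderbox source:

```cpp
#include <cassert>
#include <string>
using std::string;

class UnicodeString {
public:
    // New one-argument constructor: assumes the common case, UTF-8 input.
    explicit UnicodeString(const string& s) : UnicodeString(s, true) {}
    bool isLegacy() const { return needsConversion; }  // illustrative accessor
protected:
    // The old two-argument constructor, now hidden from ordinary callers.
    UnicodeString(const string& s, bool isUTF8)
        : text(s), needsConversion(!isUTF8) {}
private:
    string text;
    bool needsConversion;
};

// The unusual case gets its own class, whose name documents the intent.
class LegacyUnicodeString : public UnicodeString {
public:
    explicit LegacyUnicodeString(const string& s)
        : UnicodeString(s, false) {}  // derived classes may call the protected constructor
};
```

Each call site then reads UnicodeString(s) for the common case or LegacyUnicodeString(s) for the MacRoman case, and the flag disappears from client code entirely.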
And we’re done. In this case, we can improve eighty-seven separate constructor calls with one simple refactoring that takes almost no time, while leaving a clear indication that those Legacy strings could bear attention when time allows.
Someone who writes under the name “Porpentine” has published a manifesto on hypertext fiction: Creation Under Capitalism and the Twine Revolution.
I’m thinking, no wonder hypertext fiction had a lull–they hid behind middle-upper class literary pretensions, acting like it was some kind of avant-garde science. I’m seeing academic essays on hypertext buried behind passwords, I’m seeing a hypertext editor like Twine for $300, I’m seeing stories selling for $30. How many people are buying those?
Pseudo-Marxist drivel doesn’t really advance anything and, once you get the hang of it, it’s about as much fun as t-ball. Capitalism may be bad, but it’s not my fault. I’m not sure that the best way to tell a story is to begin by overthrowing Capitalism. If Porpentine has been to a ballpark lately, that $30 price tag looks pretty good.
Porpentine’s world is inconveniently inconsistent with reality.
Raised to believe that a select few create and the rest are just fans. Rich white people create and we suck it up. This is an extremely profitable system.
So they place unfair expectations on what you create. Tell you it’s too short, too ugly, too personal, ask you why it doesn’t resemble what already exists. And the answer is, why would we want it to?
This is a tantrum, and its purported target – rich white folk – is just a ritual invocation, standing in for The Devil, The Critical Establishment, Repressed Desire or whatever force led so many of our ancestors to look askance at that merrily dancing Quaker’s wife. If you make art, people will tell you it’s too short or too long, that it’s ugly, that it’s embarrassing. If you swim, you will get wet.
I mention this because, if you scroll far enough, you’ll get to a section called “WHY TWINE?” which, aside from some residue from the preceding tantrum (“A side-effect of being a minority is exhaustion, loss of time.”) is pretty good.
Howard Rheingold wrote an interesting piece in Tech Review about Engelbart’s Unfinished Revolution. It emphasizes the difficulty that Engelbart experienced in later years in finding support for his vision for using computers to augment human intelligence.
Engelbart’s pitches of linked leaps in technology and organizational behaviors probably sounded as crazy to 1980s corporate managers as augmenting human intellect with machines did in the early 1960s.
Rheingold goes way too far when he asserts that “Engelbart labored for most of his life and career to get anyone to think seriously about his idea.” In my experience, everyone took Engelbart very seriously indeed. I think Engelbart consistently received a thoughtful hearing. This was not always true of Engelbart’s colleague and fellow pioneer, Ted Nelson, who was less well understood by the research community and who rarely got along very well with IT management.
To adopt modern terms, Engelbart wanted (in 1964!) to accelerate the singularity and proposed that we set out to do it right away. This was an intriguing proposition but not a comfortable one. Doug never softened that message and I never heard him address the consequences. If the future is not evenly distributed, what will happen to the people who happen to be less augmented? If augmentation multiplies intellectual abilities, won’t natural disparities and injustices be magnified?
Dave Winer’s response is precise:
I read Howard Rheingold's piece on the Technology Review website, about how the revolution that Doug Engelbart envisioned has gone unfinished. To which I say -- not so. The work continues.
Fargo is a tool for augmenting human intellect, as were ThinkTank, Ready, MORE, Frontier and the OPML Editor. As are WordPress, Tumblr and their predecessors. And Twitter.
The web certainly is an intellect-augmenting workspace.
This is absolutely true.
A frustrating fact of the first long generation of personal computing is that we have tended to use our new abilities not to learn more nor to think more, but rather to dress up our work in order to impress managers. We all typeset our memos, dress up our Web sites, and ornament our presentations.
But still, as Winer says, we’re getting there and the work continues.
by Robert C. Martin
An interesting and readable tutorial about current code aesthetics, this book is the software engineer’s Strunk and White.
Again, it’s remarkable how greatly style has changed within the past decade. In the 20th century, you were supposed to write comments. Now, we don’t: if the code is so unclear as to need a comment, it needs to be fixed. Martin does devote an entire chapter to comments, in a quaint gesture toward the elegant weapons of a more civilized age, but it feels out of place.
More interestingly, Martin is an extreme proponent of what I call the tiny methods style. He does not hesitate to make code longer — even substantially longer — in order to break up large methods. A proper method, to Martin, is one you can see at a glance.
Clean Code begins with a pleasant chapter in which the luminaries of the Agile Era propound their vision of clean code. This is interesting and useful and the accompanying portrait sketches are great: for the people I know, the likenesses are terrific. Readers will note the preponderance of beards, and it’s interesting to learn that jUnit originated as a diversion on a plane. The rest of the book’s illustrations are juvenile puns executed by an artist who has not quite mastered the anatomy of the hand, and they add little. The code examples are clean but a few blunders creep into the text. You generally don’t see this from Addison-Wesley or O’Reilly; Prentice-Hall should have caught them.
The key change in software development over the last few years is almost surely the growth of test-driven development. It’s certainly changed the way I work. The central idea here is to build a huge and very detailed test suite alongside the application; the tests define and document how things work and help make sure that things continue to work as the system changes.
It’s essential to run tests all the time, because a test you don’t use is worthless. That means that tests need to be fast. I spent a few hours today polishing some slower tests in Tinderbox Six.
Right now there are 499 tests in all. The slowest test (testAnimation) takes 250msec, and the fastest 120 tests ran in less than a millisecond. In fact, almost everything is pretty fast. The whole suite takes about 5sec. The economics of improving this aren’t great: I invested about three hours to knock a second or two off each run, so break-even is about 10,000 runs unless you account for the annoyance of waiting for the tests to finish.
It looks like we’re up to 100,000 lines of code, ignoring blank lines.
And, yes, after a marathon development weekend, we’re making some interesting progress.
by Antonia Fraser
In 1832, Parliament passed the Reform Bill that was the culmination of decades of bitter struggle and controversy. It was, in essence, a modest reapportionment. The composition of Parliament under William IV was still based on medieval populations. Some villages that had shrunk almost to nothing still sent two members to Parliament, while the huge new industrial cities of the Midlands had no representation at all. The fight to slightly enlarge the electorate to include a slightly broader selection of wealthy men had inflamed passions; observers of all persuasions agreed that Britain stood at the abyss.
Yet, in the end, Reform passed. This lively volume shows how.
What’s striking about the fight for Reform is not that the institutions were sounder than ours or the people better, but that the fear of disaster was shared by both the radicals, who could easily envision a military dictatorship, and by the ultraconservatives, who could readily imagine revolution followed by Terror. Both sides were intransigent but even the fire-eaters could see something to fear on each flank. Reform came because a majority feared insurrection more.
The same has often been true in the US, even at times of great polarization. In the Civil Rights struggle, even solid Southerners like Johnson and Ervin knew that there was something worse than integration: they could see it in Watts and Detroit, and the South had feared it since Toussaint L’Ouverture’s revolt in Haiti. In the Great Depression, even staunch Republicans could imagine something worse than the New Deal: they could see it in the Bonus Army and they could see it in Moscow. The story of the Great Compromises is entirely driven by knowledge that the terrible swift sword was credible and that Judgment would not forever be postponed.
But is that true today? We have credible threats of right-wing violence: the 2000 white-collar riot in Florida, Josh Marshall’s litany of strange right-wing private militias and mercenaries, the whole fever dream of assault-weapon resistance to Big Government or Blue Helmets. But no one really worries anymore about the extreme left -- about revolution. We have no Eugene Debs, no Malcolm X, not even a Henry Wallace; when the right needs to conjure up a Radical for rhetorical effect, the best they can manage seems to be Bill Ayers, a primary education activist! A thriller writer can easily imagine a right-wing takeover, and they do: Seven Days in May, The Handmaid’s Tale. Nobody can see a plausible left-wing takeover; when Hollywood needs one, it conjures up things like Chinese Fleets or Space Aliens.
The Right feels free to be irresponsible because the worst possible government is the government they already have. Any left-wing president is always going to be looking at a largely right-wing judiciary and a largely right-wing military; they can easily imagine a very bad government and have good reason to take steps to prevent it. But, if the right truly believes this is the worst of all possible governments, they have no need to be reasonable because their unreasoning cannot be punished.
(An earlier version of this note appeared here).
Because I was a delegate, I missed a day of Readercon for the MA Democratic Convention. I’m busy and I’m coming down with a cold. It’s hot. I’m grumpy.
But I wasn’t completely happy with Readercon.
It’s still a terrific conference about reading, about thinking about the kinds of books that lots of people don’t consider worth thinking about. John Clute was spectacular. Greer Gilman, a retired forensic librarian, was terrific. (Forensic librarian? Who knew that was even a thing!) Yoon Ha Lee did a nice introduction to StoryNexus and the problems of interactive writing, though we’ve been doing this for more than twenty years now, and we still can’t get past the basics to sit down and talk about craft, or to talk seriously about what sort of environment we’d like to create rather than the oddities of the systems handed to us.
There were great panels and it’s good to see Readercon people.
But “The Year In Novels” was gone. So was “The Year In Short Fiction.” So was “Bookaholics Anonymous.” In their place, it seems, we have a bunch of panels about “safety at conventions.” It’s a loss.
Several of the panels this year seemed to be in the business of scolding books for lapses and insensitivities, particularly with regard to race and to sexual preference. It’s good to look at these, but at some point we also need to treat books with sympathy and understanding.
This year at Readercon, we did not speak so much about certain things. This year at Readercon, we did not speak so much.
I came home and grabbed the new issue of the Believer in order to read Nick Hornby’s list of “What I’m Reading.” I should not be coming home from Readercon looking for ideas of books to read.
by China Miéville
Before the humans came, we did not speak so much of certain things. Before the humans came, we did not speak so much.
These are the crucial lines of Embassytown, and it is representative of Miéville’s accomplishment that such an enigmatic and unlyrical statement should come to mean so much.
Embassytown addresses an old challenge of science fiction: how can we talk about (and with) people who don’t think like us? The Ariekei speak a language that requires two voices. For the Ariekei, thinking and speaking are inseparable: they literally say what they think. (This wouldn’t work for humans, because humans think many things at once; the Ariekei have a unicameral mind.) The Ariekei cannot say things they do not believe. This has obvious evolutionary advantages; if Joe says, “there are tasty sheep over the hill,” you know that Joe is telling you what he believes and not luring you into a trap. The disadvantages are obvious as well; the Ariekei cannot reason counterfactually. The simile is the latest in Ariekei technology, and the Ariekei construct monuments or broadcast staged scenes in order to create new similes with which to think.
In fact, our narrator is a simile: she is “the girl who ate what was given to her.”
It’s a complex world, executed with Miéville’s usual flair, and also with his customary reliance on places more interesting than people.
by Matt Galloway
This book is organized and presented as an Objective C homage to Scott Meyers’ classic work, Effective C++. The new book is good and interesting and provides useful tips for working in Objective C, but it’s far less interesting than Meyers.
What made Meyers’ book both necessary and important was the subtlety and indirection of modern C++. Over the years, C++ gradually developed a family of practices and customs that are not obvious, even to experienced programmers. In particular, const-correctness and exception safety are pervasive and essential to everyday C++, they’re fundamentally different from other languages, they’re easy to get wrong, and once you do get stuff wrong in your code, the blight spreads rapidly. C++ style requires that, if class A publicly inherits from B, then B is a kind of A; this convention contradicts everyday usage in other familiar languages. Some facilities — run-time introspection, ubiquitous type conversion — turn out to be good things to avoid even though they seem inviting, and others, such as preferring C++ streams to C’s printf, are worth adopting despite their initial learning curve.
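Const-correctness, for example, propagates through the call graph in a way that surprises newcomers from other languages; a small sketch (my illustration, not an example from either book):

```cpp
#include <cassert>
#include <cstddef>
#include <string>

class Document {
public:
    std::size_t length() const { return text.size(); }  // const: callable on const objects
    void append(const std::string& s) { text += s; }    // non-const: mutates state
private:
    std::string text;
};

// Taking a const reference commits everything downstream to const-correctness:
std::size_t report(const Document& d) {
    // d.append("!");  // would not compile: append() is non-const
    return d.length();
}
```

Mark one function non-const by mistake and every const path that reaches it stops compiling, which is why the blight spreads so quickly in a large codebase.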
There’s some of this in Objective C as well, but the focus in Effective Objective C 2.0 rests on syntax and semantics. Much of this is good and almost all is completely sound, but I think these tips miss many real issues in writing modern Objective C effectively. Some of these include:
- Key-value observing, its use, and its discontents
- The contested morality of adding new categories to standard objects
- The contested utility of introspection
- Pervasive use of task queues for managing concurrency
- Block idioms as a replacement for C++ stack-based resource management
- Design conflict among using delegates, requesting notification, and passing blocks
- Design conflict between using one broad protocol and several narrow protocols
- Desirability or otherwise of macros, especially in unit testing
Of greatest concern, I think, is the sparse discussion of concurrency. Sure, NSOperation and Grand Central Dispatch are great. But we have thousands of programmers now, all launching concurrent tasks left and right, who never studied concurrency formally and who don’t know a Petri net from a Petri dish. They need constructs and guidelines to keep them from stumbling into thickets of confusion when they’re just trying to get stuff done.
Outliner pioneer Dave Winer writes an evocative lament: Engelbart was sidelined.
I seldom comment on weblogs, because comments are harmful. But this seemed to be an exceptional occasion. And because I found myself taking some care with the response, I’m editing and copying it here.
Winer writes that
I would have welcomed Engelbart if he had chosen to become a user of ThinkTank, MORE or Frontier -- and we would have implemented the features he needed to make augmentation of intellect conform to his view of how things should work.
Winer’s ThinkTank and MORE realized and popularized many of the ideas in Engelbart’s NLS/Augment. I don't believe I ever actually discussed this with Engelbart, but I think this is widely understood. Winer’s outliners advanced beyond what Engelbart’s could do along different axes: Engelbart had collaboration, but he also had interaction based on magic key sequences and mysterious chords that seemed like good ideas at the time.
What we need to keep in mind is that NLS ran out of PDP-10s just before we learned how to do emulation and virtualization properly. We know how to do that now, so the trauma of being stranded on an unsupportable platform is far, far less serious today.
There's plenty of NLS in Frontier. There’s a lot of Frontier in Tinderbox.
Smart people know the literature and scour it for great ideas. That's one reason we talk so much about The Demo: Engelbart was unusually bad at writing papers, so an unusually large part of his legacy rests on having seen and used the software. That’s true for Winer as well, in a way: the legacy is in the software. But there are thousands and thousands of copies of Frontier, there are articles and books, and Dave himself has written a ton and created an entire school of blogging. You could download Frontier and everyone did: to see NLS, you had to drive out to Menlo Park.
Winer hoped Frontier would see him through. It yet may. And there’s another thing to keep in mind: we're better programmers than we were back then. I'm currently in the midst of a complete rewrite of Tinderbox -- ten years of development recapitulated in months. New platform, new framework, new language: it's remarkable how much better the new code is. Part of that’s because computers are faster and IDEs are better, but part is because we've learned a ton about programming. I’ve added features in an afternoon that I spent a week trying (and failing) to implement only a year or two ago.
Things that once seemed inconceivably difficult to redo are now perfectly feasible, and things that used to be really hard to get right now work right out of the box. We’re still learning how to program, all of us, and we’re doing it better.
Douglas Engelbart, a great computer pioneer, has died. He was 88.
Engelbart’s famous 1968 demonstration of his pioneering hypertext system introduced the mouse, graphic display, outliner, and collaborative computing. For years, people have referred to it simply as, “The Demo.”
Doug was patient, kind, and quiet. Though in his own work he was very technical, he was always drawn to literary hypertext and, in the early years, sometimes pulled me aside to ask thoughtful questions about my own work and about papers by Moulthrop & Kaplan or Joyce & Bolter. These questions were always private and for his own instruction; he seldom or never joined conference Q&A sessions. His early papers on computer-supported collaboration are milestones, but Doug was never a facile writer and, considering his enormous influence, his body of papers is remarkably slender. Though teaching was not part of his job, he trained generations of scientists and engineers who have gone on to define computer science and to create the computing ecology we know today.
Doug keynoted the first hypertext conference in 1987, where he received a standing ovation, and was for years a frequent attendee. His NLS/Augment system was the direct inspiration for many of the early hypertext tools, and his courageous embrace of the goal of augmenting human intellect inspirited a community eager to use knowledge and technology to improve the world.
This lavishly-produced, big-budget game is reasonable fun. The writing is sometimes not bad, and the art direction is superb and, if you ignore some silly gore and toilet humor, reasonably sophisticated. At times, the Borderlands world shows glimpses of history and depth — not nearly enough to bear the weight of the game, but it’s better than I’ve come to expect.
Most of the fun is tactical — dodging and vanquishing lots of bad guys. The fights can be both interesting and challenging. The computer’s tactics are much better, for example, than they were in City of Heroes. Unfortunately, nobody on the development team seems to have heard of flanking fire, so patience and keeping your head down are recipes for success.
There’s lots of good voice acting. Unfortunately, the developers don’t understand that we don’t need to hear the same taunts a thousand times. Why not diminish the frequency of these interjections gradually over time? The same goes for local color: the car vendor Scooter is amusing the first dozen times, but after that he’s just a boring vending machine. (The “stores” are actually rendered as old-fashioned coke machines that dispense game gear, a clever bit of self-deprecation.)
The game offers thousands of guns. OK: it’s about shopping. But there are too many varieties, all pretty much alike. Enough is too much. I’m a pretty analytic fellow, but I’ve given up on the stats: I pick up a gun, try it, and if it doesn’t impress me right away I sell the thing.
The developers have marshaled a huge array of narrative resources — corrupt corporations, ecological disaster, frontier violence, alien archaeology — but manage to extract almost no plot. There are hints of depth here — Russian gangsters in ante-bellum Texas? Evil industrial cartels? — but few seem to be pursued and everything tends to peter out. In the end, it’s just another boss battle.