There are obviously different kinds of programming. Without people who are not like me none of this would exist. But I've always seen much more in common with writing prose than math. It feels like you're writing a story and you're trying to express a concept to a very dumb person—the computer—who has a limited vocabulary. You've got this concept you want to express and limited tools to express it with. What words do you use?
Seibel: What do you think is the most important skill for a programmer to have?
Fitzpatrick: Thinking like a scientist; changing one thing at a time. Patience and trying to understand the root cause of things. Especially when you're debugging something or designing something that's not quite working. I've seen young programmers say, "Oh, shit, it doesn't work," and then rewrite it all. Stop. Try to figure out what's going on. Learn how to write things incrementally so that at each stage you could verify it.
One of the things I've been pushing is reading. I think that is the most useful thing that a community of programmers can do for each other—spend time on a regular basis reading each other's code. There's a tendency in project management just to let the programmers go off independently and then we have the big merge and if it builds then we ship it and we're done and we forget about it.
I think the lack of reusability comes in object-oriented languages, not in functional languages. Because the problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
If you have referentially transparent code, if you have pure functions—all the data comes in its input arguments and everything goes out and leaves no state behind—it's incredibly reusable. You can just reuse it here, there, and everywhere. When you want to use it in a different project, you just cut and paste this code into your new project.
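To make the point concrete, here is a minimal sketch of such a pure function (the function name and example text are my own, not from the interview). All of its data arrives through its argument, its result depends on nothing else, and it modifies no outside state—so it can be pasted unchanged into any project:

```python
# A pure, referentially transparent function: same input always
# yields the same output, and no external state is touched.
def word_frequencies(text):
    """Count how often each word occurs in `text` (hypothetical example)."""
    counts = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0) + 1
    return counts

print(word_frequencies("the banana the gorilla the jungle"))
# → {'the': 3, 'banana': 1, 'gorilla': 1, 'jungle': 1}
```

Because nothing is hidden in an enclosing object or ambient environment, reusing it really is just copying the text of the function.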
If you give two programmers the same problem—it depends on the problem, but problems of a more mathematical nature, they can often end up writing the same code. Subject to just formatting issues and relabeling the variables and the function names, it's isomorphic—it's exactly the same algorithms. Are we creating these things or are we just pulling the cobwebs off?
John Washbrook, who was himself a senior academic in the department, took me under his wing and he told me something that was very important. He said, "Just start something, no matter how humble." This is not about programming, this is about research. But no matter how humble and unoriginal and unimportant it may seem, start something and write a paper about it. So that's what I did. It turned out to be a very significant piece of advice.
I've told that to every research student I've ever had since. Because it's how you get started. Once you start the mill turning then computer science is very fractal—almost everything turns out to be interesting, because the subject grows ahead of you. It's not like a fixed thing that's there that you've got to discover. It just expands.
And think about failure modes—I remember one of the great lessons I got about programming was when I showed up at the airport at Heathrow, and there was a power failure and none of the computers were working. But my plane was on time.
Somehow they had gotten print-outs of all the flights. I don't know where—there must have been some computer off-site. I don't know whether they printed them that morning or if they had a procedure of always printing them the night before and sending them over, and every day when there is power they just throw them out. But somehow they were there and the people at the gates had a procedure for using the paper backup rather than using the computer system.
I thought that was a great lesson in software design. I think most programmers don't think about, "How well does my program work when there's no power?"
You look at Knuth's original Literate Programming, and he's really trying to say, "What's the best order for writing a book," assuming that someone's going to read the whole book and he wants it to be in a logical order. People don't do that anymore. They don't want to read a book, they want an index so they can say, "What's the least amount of this book that I have to read? I just want to find the three paragraphs that I need. Show me that and then I'll move on." I think that's a real change.
I guess to me the biggest change is that nowadays you can't possibly know everything that's going on in the computer. There are things that are absolutely out of your control because it's impossible to know everything about all the software. Back in the '70s a computer had only 4,000 words of memory. It was possible to do a core dump and inspect every word to see if it was what you expected. It was reasonable to read the source listings of the operating system and see how that worked. And I did that—I studied the disk routines and the card-reader routines and wrote variants of my own. I felt as if I understood how the entire IBM 1130 worked. Or at least as much as I cared to know. You just can't do that anymore.
I think it's not an accident that we often use the imagery of magic to describe programming. We speak of computing wizards and we think of things happening by magic or automagically. And I think that's because being able to get a machine to do what you want is the closest thing we've got in technology to adolescent wish-fulfillment.
And if you look at the fairy tales, people want to be able to just think in their minds what they want, wave their hands, and it happens. And of course the fairy tales are full of cautionary tales where you forgot to cover the edge case and then something bad happens.
If I could change one thing—this is going to sound stupid—but if I could go back in time and change one thing, I might try to interest some early preliterate people in not using their thumbs when they count. It could have been the standard, and it would have made a whole lot of things easier in the modern era. On the other hand, we have learned a lot from the struggle with the incompatibility of base-ten with powers of two.
Seibel: You mention four disciplines: music, graphics, mathematics, and text. Those are about as old as humanity. Clearly there are powerful ideas there that are independent of computers—the computer just provides a way to explore them that might be hard without the computer. Is there also a set of interesting, powerful ideas inherent in the computer? Is programming or computer science another deep discipline—a fifth area we can only do because we have computers?
Ingalls: Yes, I think that's what I am getting at. The curriculum I've always envisioned is one in which you start with one of these and maybe, from the motivation of going deep in one of those areas, you move over to one of the other ones that's less familiar and do a similar thing there. And a lesson to be learned is that the way in which you get to those simpler, deeper structures that generate that whole field is similar in every case.
There's an algebra of graphics. It's primitive objects, superposition, translation, rotation. Or music: it's notes and temporal sequences and chords—same thing. And I think this goes back to seeing how the wind works and how the planets move. It's an invitation to go down and find out how things work and learn the things that make up the algebra—the processes and the primitive things. So yes, that fifth area, as you call it, is just what's common about all of these things.
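One way to see such an "algebra" in miniature is as primitive objects (points) plus operations (translation, rotation) that compose like any other functions. This is only an illustrative sketch of the idea; the function names and the point representation are my own:

```python
import math

# Primitive objects: points as (x, y) pairs.
# Operations: each returns a function from point to point.
def translate(dx, dy):
    return lambda p: (p[0] + dx, p[1] + dy)

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return lambda p: (c * p[0] - s * p[1], s * p[0] + c * p[1])

# Composition is what makes it an algebra: operations combine
# into new operations of the same kind.
def compose(f, g):
    return lambda p: f(g(p))

# Rotate (1, 0) by 90 degrees, then shift it right by one unit.
transform = compose(translate(1, 0), rotate(math.pi / 2))
x, y = transform((1.0, 0.0))
print(round(x, 6), round(y, 6))  # → 1.0 1.0
```

The same shape—primitives, operations, composition—fits his music example too: notes as primitives, sequencing and chords as the combining operations.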
Often, reading about famous people, the side of it that I'm interested in is, how do they make their life work? All the things that weren't their passion, and how did they deal with that and with their family, and with their finances, and balancing that? Or did they just hole up and say, "To hell with everything else," and let it just come crumbling down until they had their work done?
You know the old story about the telephone and the telephone operators? The story is, sometime fairly early in the adoption of the telephone, when it was clear that use of the telephone was just expanding at an incredible rate, more and more people were having to be hired to work as operators because we didn't have dial telephones. Someone extrapolated the growth rate and said, "My God. By 20 or 30 years from now, every single person will have to be a telephone operator." Well, that's what happened. I think something like that may be happening in some big areas of programming, as well.
One of the things that I've been thinking about off and on over the last five-plus years is, "Why is programming so hard?"
You have the algorithmic side of programming and that's close enough to mathematics that you can use mathematics as the basic model, if you will, for what goes on in it. You can use mathematical methods and mathematical ways of thinking. That doesn't make it easy, but nobody thinks mathematics is easy. So there's a pretty good match between the material you're working with and our understanding of that material and our understanding of the skill level that's required to work with it.
I think part of the problem with the other kind of programming is that the world of basically all programming languages that we have is so different, in such deep ways, from the physical world that our senses and our brains and our society have coevolved to deal with, that it is loony to expect people to do well with it. There has to be something a little wrong with you for you to be a really good programmer. Maybe "wrong with you" is a little too strong, but the qualities that make somebody a well-functioning human being and the qualities that make somebody a really good programmer—they overlap but they don't overlap a whole heck of a lot. And I'm speaking as someone who was a very good programmer.
The world of von Neumann computation and Algol-family languages has such different requirements than the physical world that to me it's actually quite surprising that we manage to build large systems at all that work even as poorly as they do.
Perhaps it shouldn't be any more surprising than the fact that we can build jet airliners, but jet airliners are working in the physical world and we have thousands of years of mechanical engineering to draw on. For software, we have this weird world with these weird, really bizarre fundamental properties. The physical world's properties are rooted in subatomic physics, but you've got these layers: you've got subatomic physics, you've got atomic physics, you've got chemistry. You've got tons of emergent properties that come out of that, and we have all of this apparatus for functioning well in that world.
I don't look around and see anything that looks like an address or a pointer. We have objects; we don't have these weird things that computer scientists misname "objects."
I have a little bit of a rant about computer science also. I could make a pretty strong case that the word science should not be applied to computing. I think essentially all of what's called computer science is some combination of engineering and applied mathematics. I think very little of it is science in terms of the scientific process, where what you're doing is developing better descriptions of observed phenomena.
The problem being the old saying in the business: "fast, cheap, good—pick any two." If you build things fast and you have some way of building them inexpensively, it's very unlikely that they're going to be good. But this school of thought says you shouldn't expect software to last.
I think behind this perhaps is a mindset of software as expense vs. software as capital asset. I'm very much in the software-as-capital-asset school. When I was working at ParcPlace and Adele Goldberg was out there evangelizing object-oriented design, part of the way we talked about objects and part of the way we advocated object-oriented design to our customers and potential customers was to say, "Look, you should treat software as a capital asset."
And there is no such thing as a capital asset that doesn't require ongoing maintenance and investment. You should expect that there's going to be some cost associated with maintaining a growing library of reusable software. And that is going to complicate your accounting, because it means you can't charge the cost of building a piece of software only to the project or the customer that's motivating it, the way you would think of a capital asset.
Suppose someone is describing something to me from postulates like, "Here's a computer and here are the op codes." I can visualize the structure of programs and how things are efficient or inefficient based on those op codes, by seeing the bottom and imagining the hierarchy. And I can see the same thing with programs. If someone shows me library routines or basic bottom-level things, I can see how you can build that into different programs and what's missing—the kinds of programs that would still be hard to write. So I can envision that pyramid, and the problem is to try and decompose it and get the bottom pieces.
Modern programming scares me in many respects, where they will just build layer after layer after layer that does nothing except translate. It confuses me to read a program which you must read top-down. It says "do something." And I go find "something." And I read it and it says, "do something else," and I go find "something else," and it goes back to the top maybe. And nothing gets done. It's just relegating to a deeper and deeper level. I can't keep it in my mind—I can't understand it.
Seibel: Do you think of yourself as a scientist, an engineer, an artist, or a draftsman?
Allen: I think of myself as a computer scientist. I was involved in my corner of the field in helping it develop. And those were interesting times—the emergence of computer science—because there was a lot of question about, "Is this a science? Anything that has to have science in its name isn't a science." And it was certainly unclear to me what it meant.
But compilers were a very old field—older than operating systems. Some day I want to really look it up. The word compiler comes actually from the embedding of little snippets of instructions to execute. Like an add would be spelled out in very primitive terms for the machine. If you wanted to do an add, then it would go to its library that defined that and expand it.
But assemblers were also using symbolics. I'm not sure this is accurate, but I used to believe that the first early use of symbolics for names of variables came from a man named Nat Rochester, on a very early IBM machine, the 701, around 1951. He was in charge of testing it and they wrote programs to test the machine. In the process of doing that, they introduced symbolic variables. Now, I've seen some other things since that make me believe that there were earlier ways of representing information symbolically. It emerged in the early '50s, I think, or maybe even in the '40s. One would have to go back and see exactly how things were expressed in the ENIAC, for one thing.
Seibel: So somewhere along the line, you realized you had become a computer scientist, developing theories about compiler optimization and so forth. But you started out as a programmer, hired to write code. By the time of the PTRAN project you were managing a team of people who were actually writing the software. Why did you make that switch?
Allen: Well, probably two reasons—one, I wasn't a very good programmer; I tended to make quite a few mistakes—unlike the conventional wisdom at the time that said that women make good programmers because they pay attention to details. I didn't fit that category. So I tended to be kind of disinterested in getting all the details right and I was much more interested in the way systems work.
My interest in mathematics was very abstract. If I had had enough money to go on to get a PhD, I would have become a geometer. I loved the rigor of that process. That's what I really most enjoy, puzzling through systems—puzzling through the engineering kinds of things without necessarily knowing the details of what one would need to know to be an engineer, which is quite a different area.
Java didn't feel right. My old reflexes hit me. Java struck me as too authoritarian. That's one of the reasons why I mentioned that Perl felt so good, because it's got the safety and the checks but it is so damn multidimensioned that the artist part of me has a lot of room to express things early and to think about the right way to do things. I have some freedom.
When I first messed with Java—this was when it was a little baby language, of course—I said, "Oh, this is just another one of those languages to help not-so-good programmers go down the straight and narrow by restricting what they can do." But maybe we've come to a point where that's the right thing. Maybe the world has gotten so dangerous you can't have a good, flexible language that one percent or two percent of the programmers will use to make great art, because the world is now populated with 75 million run-of-the-mill programmers building these incredibly complicated applications and they need more help than that. So maybe Java's the right thing. I don't know.
At one level I'm thinking, "This is way cool that you can do that." At the other level, the programmer in me is saying, "Jesus, I'm glad that this wasn't around when I was a programmer." I could never have written all this code to do this stuff. How do these guys do that? There must be a generation of programmers way better than what I was when I was a programmer. I'm glad I can have a little bit of repute as having once been a good programmer without having to actually demonstrate it anymore, because I don't think I could.
In other words, there's still so much more beyond any five pages of my book that you can make a lifetime's worth of study, because there's just that much in computer science. Computer science doesn't all boil down to a bunch of simple things. If it turned out that computer science was very simple, that all you needed to do was find the right 50 things and then learn them really well, then I would say, "OK, everybody in the world should know those 50 things and know them thoroughly."
But it isn't that way. I've got thousands of pages and exercises, and I write it down and put it in the book so that I don't have to have it all in my head. I have to come back to it and learn it again. And I have the answers to the exercises because I know that ten years from now I won't remember how to do the darn thing and it will take me a long time to reconstruct it. So I give myself at least the clues to how to reconstruct stuff.
Seibel: Do you feel like programmers and computer scientists are aware enough of the history of our field? It is, after all, a pretty short history.
Knuth: There aren't too many that are scholars. Even when I started writing my books in 1963, I didn't think people knew what had happened in 1959. I was reading in American Scientist last week about people who had rediscovered an algorithm that Boyer and Moore had discovered in 1980. It happens all the time that people don't realize the glorious history that we have. The idea that people knew a thing or two in the '70s is strange to a lot of young programmers.
It's inevitable that in such a complicated field that people will be missing stuff. Hopefully with things like Wikipedia, achievements don't get forgotten the way they were before. But I wish I could also instill in more people the love that I have for reading original sources. Not just knowing that so-and-so gets credit for doing something, but looking back and seeing what that person said in his own words. I think it's a tremendous way to improve your own skills.
It's very important to be able to get inside of somebody else's way of thinking, to decode their vocabulary, their notation—the way they thought and the way they made their own discoveries. I often read source materials of what brilliant people have said about this stuff in the past. It'll be expressed in unusual ways by today's conventions, but it's worth it to me to penetrate their notation and to try to get into their idea.
For example, I spent a good deal of time with Babylonian manuscripts of how they described algorithms 4,000 years ago. What did they think about? Did they have while loops and stuff like this? How would they describe it? And to me this is very worthwhile for understanding how the brain works, but also how they discovered things.
A couple of years ago I found an old Sanskrit document from the 13th century that was about combinatorial math. Almost nobody the author knew would have had the foggiest idea what he was talking about. But I found a translation of this document and it was speaking to me. I had done similar kinds of thinking when I was beginning in computer programming.
And so to me reading source materials is great enrichment for my own life and creativity.
He says things like, "Do good stuff." He says, "If you don't do good stuff, in good areas, it doesn't matter what you do." And Hamming said, "I always spend a day a week learning new stuff. That means I spend 20 percent more of my time than my colleagues learning new stuff. Now 20 percent at compound interest means that after four and a half years I will know twice as much as them. And because of compound interest, this 20 percent extra, one day a week, after five years I will know three times as much," or whatever the figures are. And I think that's very true. Because I do research I don't spend 20 percent of my time thinking about new stuff, I spend 40 percent of my time thinking about new stuff. And I've done it for 30 years. So I've noticed that I know a lot of stuff. When I get pulled in as a troubleshooter, boom, do it that way, do it that way. You were asking earlier what should one do to become a better programmer? Spend 20 percent of your time learning stuff—because it's compounded. Read Hamming's paper.
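The speaker admits the figures are approximate ("or whatever the figures are"), and the arithmetic is easy to check: at 20 percent extra learning per year, compounding gives roughly a doubling in just under four years and about two and a half times as much after five. A quick sketch (the function name is mine, not Hamming's):

```python
import math

# Hamming's "compound interest" claim: learning 20% more per year
# compounds like interest on principal.
def knowledge_ratio(extra_rate, years):
    """How much more you know than a colleague after compounding."""
    return (1 + extra_rate) ** years

# Years until you know twice as much, at 20% extra per year.
years_to_double = math.log(2) / math.log(1.2)
print(round(years_to_double, 1))          # → 3.8
print(round(knowledge_ratio(0.2, 5), 2))  # → 2.49
```

So the quoted numbers are in the right ballpark: a doubling takes about 3.8 years rather than four and a half, and five years yields roughly 2.5x rather than 3x, but the compounding effect itself is exactly as described.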