There's a question over on
Stackoverflow about the value of knowing how low-level details work. The questioner cited an
article written by one of the founders of Stackoverflow.com, Joel Spolsky. Joel's point is that modern programmers tend to focus on learning high-level abstractions like Java and .NET and forget about the byte-level details. He goes on to cite stories about how strings are stored in C to bolster his argument. The questioner wants to hear specific examples of how knowing C can make one a better programmer.
Joel is a terrifically smart guy. His degree from Yale is wonderful; he's got a Microsoft pedigree; his Fog Creek software company has been in business for years now; he's one of the best known bloggers about software on the web; he founded Stackoverflow with Jeff Atwood. Seeing how much time I spend there for no more remuneration than reputation points, the thrill of helping others, and learning a few things along the way, I'd say that Joel is a tremendous success in this field by every measure.
I've even agreed with his point on
this blog. Who can argue in favor of ignorance? "Please tell the court when you stopped beating your wife, sir." There's no winning that point.
But the answer that I started to write in response to the question was negative. I recommended taking Joel with a grain of salt, since the post was written in 2001. My own reaction surprised me, because I'm generally in favor of learning regardless of its commercial payoff. So I decided to explore the idea a bit more here.
I learned C while I was still a mechanical engineer. The only language I'd ever known was FORTRAN, of course. One day my employer disconnected us from the VAX computer we were all sharing and gave us
individual Sun workstations. I had Unix at my fingertips. I was fortunate enough to sit in an aisle with a brilliant guy named Kim Perlotto. He worked in another group that didn't have anything to do with the numerical analysis gang that I ran with, but he was wonderfully smart and terrific to talk to. I didn't appreciate the computer science knowledge that was spewing out of him all the time, because I was so focused on engineering that I was too ignorant to even know what he was talking about. ("Software objects? Since they're 'soft', they must be deformable - maybe viscoplastic. We'll need an appropriate large strain measure, like Green-Lagrange and its energy conjugate stress measure, 2nd Piola-Kirchhoff, maybe a viscoplastic material model by Kevin Walker or
Chaboche...")
I was walking by Kim's cube one day when I spied his well-thumbed copy of pre-ANSI K&R sitting on the corner of his desk. I picked it up and asked about it. He smiled and said, "Wanna learn C? You can borrow it if you like."
I struggled through that book. The whole idea of pointers escaped me for a while. I remember the day I figured out how function pointers worked: I could change the way a program behaved simply by pointing a pointer at a new function. Magic! I was so happy when a friend complained about a C routine that returned nonsense results from the input arrays passed into it. My suggestion, that because C arrays are zero-based he needed to subtract one from the input pointers, saved the day. I was able to bask in glory for an entire afternoon.
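The original routines are long gone, so this is only a toy reconstruction of the two ideas, with made-up names like apply_sum; it shows a function pointer changing a program's behavior and the old pointer-minus-one trick for one-based callers:

```c
#include <stdio.h>

/* Two interchangeable operations with the same signature. */
double square(double x) { return x * x; }
double negate(double x) { return -x; }

/* Sum f(a[i]) over n elements; the caller decides what f is. */
double apply_sum(double (*f)(double), const double *a, int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += f(a[i]);
    return total;
}

int main(void)
{
    double data[] = { 1.0, 2.0, 3.0 };

    /* Swapping the function pointer changes what the program does. */
    printf("%f\n", apply_sum(square, data, 3));  /* 14.0 */
    printf("%f\n", apply_sum(negate, data, 3));  /* -6.0 */

    /* The zero-based trick: a caller thinking in 1-based indices
       (say, code ported from FORTRAN) can be handed data - 1 so that
       p[1] refers to data[0]. Strictly speaking the C standard does
       not bless a pointer before the start of an array, but this was
       the classic workaround. */
    const double *p = data - 1;
    printf("%f\n", p[1]);  /* 1.0 */
    return 0;
}
```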
It was my first step away from engineering and towards software development. When C++ came along, it was close enough to C to entice me to learn it. (Much like you can entice a fruit-loving dog out of a crate with a wedge of apple.) I wrote C++ for a living when I first left engineering, allowing me to dip my toes into the vast ocean of object-orientation. Then Java came along, and now C#.
I'm happy to say that I did learn C and C++ well enough to feel comfortable and conversant with both. But if asked to write either one now, I'd have to go back and relearn a lot of the syntactic subtleties. It's been eight years since I last wrote in either language.
So when I started to think about the Stackoverflow question, I was hard-pressed to think of a specific example of how knowing C has made me a
better programmer. It changed me into a programmer in the first place, but I don't write C anymore.
I'd say I'm a
much better programmer now than I was when I first picked up K&R. All those years of learning, context, and experience have helped. I find it impossible to tease my knowledge of C out of that tangle and hold it up to the light.
The follow-up question should be: Are all the layers of abstraction being used in software development harmful? Are the generations of programmers plying their trade today inferior to their
predecessors, who were worried about making every byte count? I would say "it depends", in the same way that
Brian Cox is both Isaac Newton's inferior and superior in physics. Brian is a brilliant guy who has internalized all that Newton gave us and has gone far beyond it, but he'd be the first to admit that he's standing on the shoulders of giants.
The difference is that it's not possible for Brian to practice physics without a thorough understanding of everything Newtonian; calculus, which Newton gave us, is still the mathematics of dynamic systems. I think it is possible to make a living as a developer and never write C. The cursory understanding of pointers and manual memory management that a skim through K&R would give you might be sufficient.
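For the record, this is roughly the level of C I have in mind. It's a minimal sketch of my own, not anything lifted from K&R, just pointers and a manually managed buffer:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    /* Manual memory management: you own this buffer's lifetime,
       not a garbage collector. */
    char *greeting = malloc(6);          /* room for "hello" plus '\0' */
    if (greeting == NULL)
        return 1;
    strcpy(greeting, "hello");

    /* A pointer is just an address; dereferencing it reads or
       writes whatever lives there. */
    char *p = greeting;
    *p = 'H';                            /* changes greeting[0] */

    printf("%s\n", greeting);            /* prints "Hello" */

    free(greeting);                      /* forget this and the buffer leaks */
    return 0;
}
```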
Peering behind the curtain to understand everything beneath the abstractions is a laudable impulse, but it has to be indulged within the constraints of energy and time. There are only so many hours in the day, and lots to learn. Economists would tell us to be mindful of opportunity costs. Joel Spolsky is a smart guy, but his blog isn't a sacred text - yet.