tecznotes

Michal Migurski's notebook, listening post, and soapbox.

Jan 3, 2009 9:30am

programming: like pulling teeth

Tom linked to this 2002 conversation between Kent Beck (programmer) and Alan Cooper (interaction designer), "Extreme Programming vs. Interaction Design":

Kent Beck is known as the father of "extreme programming," a process created to help developers design and build software that effectively meets user expectations. Alan Cooper is the prime proponent of interaction design, a process with similar goals but different methodology. We brought these two visionaries together to compare philosophies, looking for points of consensus - and points of irreconcilable difference.

So they reach one point of irreconcilable difference about midway through:

Beck: OK, wait. I agreed with you very close to 100 percent, then you just stepped off the rails. I don't see why this new specialist has to do his or her job before construction begins?
Cooper: It has to happen first because programming is so hellishly expensive and the cost of programming is like the cost of pulling teeth. ... Building software isn't like slapping a shack together; it's more like building a 50-story office building or a giant dam.
Beck: I think it's nothing like those. If you build a skyscraper 50 stories high, you can't decide at that point, oh, we need another 50 stories and go jack it all up and put in a bigger foundation.
Cooper: That's precisely my point.
Beck: But in the software world, that's daily business.
Cooper: That's pissing money away and leaving scar tissue.

The interview might as well stop there, because this is the One, Central, Insurmountable difference between these two approaches toward development work. XP is adapted to a context where motivation is expensive and change is cheap. Interaction design (at least how Cooper explains it) is adapted to a context where motivation is cheap and change is expensive. It should be obvious that contexts of both kinds can exist in the world: there are situations where it's easy to return to previous decisions and modify them (software, for one), and there are other situations where it's not (e.g. buildings, dams).

I think in this particular case, Cooper is pretty much wrong. They're talking about software, so we know that change is relatively cheap in the context of this conversation - cheaper than buildings, anyway. They just start to touch on the way in which running code becomes an object to think with, a thing to learn from: "you can build a program that improves with age instead of having it born powerful and gradually degrading" (Beck). Cooper believes that it's possible to suss out requirements and desires by talking to people about what they want and getting sign-off on a promise, but experience shows that people's desires change in response to novelty. Henry Ford's "faster horse" line comes to mind, as does perpetual beta wunderkind Flickr: starting with something that works enough to elicit use and reaction trumps big, up-front design when your changes are low cost. There are contexts where change is high-cost: skyscrapers, movies, mass-produced objects. Software, especially the astounding percentage of the world's software that's written by companies for their own internal use, is not one of them.

Comments (20)

  1. Suffice to say that Alan doesn't speak for all interaction designers -- actually, he doesn't even speak for the designers who work at his eponymous company. To understand Alan, you have to appreciate that he worked essentially in a pre-web era, when what he says about programming was true. Perpetual beta wasn't possible when you were delivering final code on disks. It's also worth noting that he's updated his views somewhat since 2002 (largely at the encouragement of the rest of the IxD community, including the staff at his company), and that in no way does this view of interaction design represent current practice.

    Posted by peterme on Saturday, January 3 2009 7:47pm UTC

  2. I would say you are right in that context is a key element. Cooper is right about the class of projects where there is scale or complexity to worry about. Beck is right where the system is easily decomposed into independent parts. I watched a project Beck was working on at a company back around the time this interview was conducted. Classic large IT project with many dependencies. After looking at the requirements I chose to avoid the project. Some kinds of systems can't be tossed together, torn down and rebuilt time after time. It was obvious to me that this project was a situation where the underlying design was vital, and certain elements had to be thought out well in advance. That takes requirements gathering and up-front effort. The argument that software is mutable only holds to a certain level of complexity, after which software is like building a skyscraper. The project Beck was involved in was a monumental failure and took several years to clean up. A subsequent project at another company turned out similarly. The common thread was the type of system. As with any methodology, the key is knowing what situations are appropriate, and what situations require something new. XP and other agile methods fit well with many desktop and web projects, not so well with corporate systems. Blindly following a methodology as the only path to project salvation is a guarantee of failure. This is as true of XP as of the waterfall model or structured programming. It's worth revisiting the Fred Brooks essay "No Silver Bullet" before staking everything on the latest methodology to jump off the boat and plant the flag of conquest. http://www.lips.utexas.edu/ee382c-15005/Readings/Readings1/05-Broo87.pdf

    Posted by mark on Saturday, January 3 2009 11:33pm UTC

  3. Mark, thanks for the comment - I'm really interested in what you're saying about Beck's project flops. One of them wasn't the infamous C3, was it? (http://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compensation_System) My feeling is that what I wrote about the costs of change holds here, as well. I had a "shaka, his eyes opened" moment with the idea of bureaucracy when I read Implementation a few months back (http://mike.teczno.com/notes/books/implementation.html), and it explained the need for planning, rules, process, etc.: "The costs of bureaucracy - a preference for procedure over purpose or seeking the lowest common denominator - may emerge in a different light when they are viewed as part of the price paid for predictability of agreement over time among diverse participants." I'm learning that methodology follows from the political and social nature of a given project. It's unwise to assert that "change is cheap!" when in particular cases it may very well not be. I loved reading Fred Brooks's The Mythical Man-Month because it was such a poignant reminder of how circumstances (e.g. the availability of interactive debugging) can affect possibility. Do you think that Beck is bad at estimating costs in the abstract? Does he bite off more than he can chew because he's interested in advancing the case for XP methodology more than he's interested in a given client's needs?

    Posted by Michal Migurski on Sunday, January 4 2009 8:31am UTC

  4. Thanks Peter, you're spot on re: Cooper's (the man as well as the organization) evolving relationship with XP/agile/etc. As a designer at Cooper for the past 5 years I've seen this firsthand. Until a few years ago, the prevailing notion was that the "up-front" work of research and interaction design was cheaper, faster, and a better communication tool for project stakeholders than working in code, especially in the large institutional projects that Cooper primarily worked on at the time. As rapid development and prototyping technologies have matured, the cheaper/faster argument has become relative. With web apps, getting it "right" the first time is a) less important because you can perpetually adjust and b) less possible because the audience and uses of a product are so much more malleable. There's an ongoing discussion of how to integrate interaction design and agile development methods on the Cooper blog at http://www.cooper.com/journal/agile. Michal, re: your assessment of the cost of change in internal software. I think when software being built for internal use is built BY its internal users, you're right. But a project of significant size brings with it a host of complications--layers of abstraction between the development team and the user; political considerations of management input and budget targets; people who are experts at doing the jobs the software is intended for but have limited ability to communicate their needs or respond to interim solutions... It's situations like these where an appropriate amount of design research, strategic design thinking, and establishing consent through collaboration and iteration is invaluable to help get a project off on the right foot.

    Posted by Tim McCoy on Monday, January 5 2009 6:45am UTC

  5. Change is cheap, but it's not free. In carpentry there's a common maxim: "measure twice, cut once." It's a hell of a lot cheaper and easier to change a functional specification or UML class diagram before a line of code is written than to build a system only to find later that it doesn't meet the user's true requirements. Non-developers often have a difficult time articulating their needs because they aren't even aware of what is possible. Time and again I've worked with clients who in essence wanted to computerize some real-world form or process. They don't really know what they want other than to get some current process "on the web." It's our job as software developers to help guide them through the haze. Another problem is that the people who will be using the system on a daily basis are often not those we talk to in discovery to ascertain requirements, so end-users get stuck with a system that doesn't suit their needs because management doesn't understand their own problem.

    Posted by Jough Dempsey on Monday, January 5 2009 2:01pm UTC

  6. What Beck failed to understand, already back in 2002, is that to communicate the power of a new piece of software to its future users, one does not need to implement it. It can be done using paper prototypes, for example. And since building a representative storyboard of the user interface is about 100 times faster than implementing it, iterating the user interface by writing code becomes costly and meaningless. However, the role of this storyboard or user interface specification is often misunderstood: 1) it does not need to be built in full, and 2) it can and should change when there is need for it to change. I'm not at all sure that Alan's points about the actual implementation are correct, but his points about achieving the correct functionality by specifying the user interface before implementation definitely are. If the user interface is designed first, every person in the project has the chance to understand what the end state of the project is, starting from day 1 of the implementation, and that saves a lot of money. If the implementation is done in an agile fashion, one just picks the most essential features from the spec. It may turn out that some of the designed functionality was partly wrong, or some change may even make it unnecessary, but if the whole concept is not rotten, those parts are typically easy to fix. New features that take into account the inevitable change can be designed on the fly. The main part of the software, the 80%, must be gotten right from the start - and it is even easy to do so - but that needs a lot of fast-paced iteration with pen and paper. Splitting that work into every implementation iteration doesn't work: the scope of the UI design has to be much larger than the scope of implementation in the iteration, since UI design is about 50-200 times faster. Designing the UI as a whole and well ahead of implementation keeps the biggest changes in the pen-and-paper phase, which is very cheap compared to deleted or completely refactored code. Thus the UI design does not benefit from being part of the implementation iteration, which again is in favour of doing it first.

    Posted by Karri-Pekka Laakso on Monday, January 5 2009 3:18pm UTC

  7. Thanks for writing about this, but my experience is that There Will Be Blood, one way or the other. Nothing worthwhile is cheap or easy in real life. It is necessary to suffer both long hours as a programmer to effect change, and long hours as a communicator and planner to try and prevent excessive change. Neither may be abstracted away with a magical approach. The best way to control costs is to be detail oriented in both thinking and communication. There are certainly many things that may be done to make a programming project work, but the human brain is up to the task, even without a popular methodology du jour.

    Posted by Gordon Luk on Monday, January 5 2009 6:34pm UTC

  8. the background on this website makes me nauseous

    Posted by blur on Monday, January 5 2009 11:26pm UTC

  9. Karri-Pekka, I'm particularly interested in the kinds of software systems which cannot be paper-prototyped. Imagine, for example, a prototype of a web browser. How would it be possible to communicate the use of the thing with a blank box? "Imagine eleventy billion pages of useful information behind this window..." I think there's a class of social or group or communication software that is essentially un-prototypeable because when it comes into contact with its users, they are changed and it is changed. The clumsy "perpetual beta" language coming out of the O'Reilly publishing house doesn't even begin to do justice to this effect, because it's really about a "perpetual now". The software that is most powerful is also hardest to communicate without implementation. I think this is practically a tautology, actually. Would it be fair to draw an analogy between your pen-and-paper iterations and the term "user story", in the Agile / XP sense? Jough, I have to disagree with you on UML class diagrams ... they are, I swear, a total waste of time & energy. I agree that it's easy to change a spec before a line of code is written, but I also assert that it's monumentally harder to see how or why a spec needs to be changed without input from implemented code. A lot of things look great on paper, until they are given form and bump into fleshy human beings and are forced to adapt. Flickr used to be the top-shelf example of this phenomenon to me, but now it's Twitter. If ever there was a critical, world-changing piece of software for which *no one* involved had a clue of its eventual utility, Twitter is it. I was mistakenly underwhelmed by Flickr when I first encountered it in its flash-chat incarnation, and I was absolutely sure that it would disappear along with cameraphones in general within a few short years.

    Posted by Michal Migurski on Tuesday, January 6 2009 12:03am UTC

  10. Clearly expressed and very close to my own thoughts on that dialogue. On the point of movies (and possibly other media) that are traditionally highly Big Design Up-Front, I was struck recently watching the making-of featurettes on the Lord of the Rings DVDs by how one of the distinguishing features of that production seems to be how much was kept flexible for so long. They were constantly doing rewrites, to the point where actors would ignore script pages from days before the actual day of shooting, knowing they would be superseded by later drafts. They continuously re-shaped and changed the story and their tools for telling it through a "post-production" effects process that began during the writing stage and continued almost up to the day of the films' releases. Ironically, since they had this extremely strong vision for what they were trying to make that was so clearly externalized in the books themselves, they were able to play extremely fast and loose with the traditional design documents of filmmaking, from the screenplay to the production schedule. Also, the digitization of a growing portion of the process, from design and pre-visualization through the creation of whole characters and even onto the actual technical process of sound mixing and digital grading, seemed to give the filmmakers a flexibility that we would normally associate more with software development than big budget moviemaking. I wonder if, in the long run, BDUF is simply the product of insufficient tools, which impose undue economic constraints, and if it will, step-by-step, disappear from all creative fields as digitization sweeps through, lowering budgets and increasing flexibility and malleability.

    Posted by Greg Borenstein on Tuesday, January 6 2009 10:22am UTC

  11. Posted by Dorian Taylor on Tuesday, January 6 2009 10:28am UTC

  12. Greg, I like the metaphor of moviemaking for software development, though I draw a different conclusion from it. At the grossest level, I'd say LOTR is a canonical example of the value of design "up-front." Precisely because they had a well articulated spec for the story, characters, situations, motivations, goals, etc., the film's creators had the ability to iteratively refine the finished product. They could shoot scenes that made no sense until later digital manipulation allowed them to "compile." They could look at dailies and recognize when something wasn't working and adjust. The filmmakers weren't compelled to produce something that could be coherently viewed every X weeks because they knew what shape it needed to have at the end of the line. Most filmmaking works this way--there's the up-front work of understanding the characters, the story, the settings. That design must be communicated to the businesspeople with budget and filmmakers with practical experience. Only when there's a mutual understanding and commitment to the design does production get underway. And that production inevitably (and by design) results in adjustments to the original plan, sometimes minor, sometimes major. The metaphor is also good because, like software, there are so many different types of moviemaking, from small independent writer-director "start-ups" to huge multinational corporate productions. And filmmaking is littered with as many stories of production-gone-awry as software. Plus, just like in software, good film "design" always leaves the door open for a sequel.

    Posted by Tim McCoy on Tuesday, January 6 2009 4:34pm UTC

  13. Can I throw a tiny wrench in the gears here and mention television? One of the things that came to mind when I mentioned movies-as-BDUF in the original post was the contrast with television series. Frequently, there's a pilot, then a premiere, then possibly a few episodes or even seasons until a show hits its stride and gains broad popularity. It's rare for a movie to be given a second chance upon release - once theatres decide a film isn't making money, it gets pulled. This is not quite so much the case with television programming. Could the presence of a make-or-break moment or a critical introduction be one difference between projects appropriate for iterative vs. up-front design?

    Posted by Michal Migurski on Tuesday, January 6 2009 8:44pm UTC

  14. @Michal, "Karri-Pekka, I'm particularly interested in the kinds of software systems which cannot be paper-prototyped. Imagine, for example, a prototype of a web browser. How would it be possible to communicate the use of the thing with a blank box? "Imagine eleventy billion pages of useful information behind this window..."" You may have a point there, but your example is not realistic. First of all, what are you prototyping in your example? A browser? The concept of the web? A browser can be easily presented with a paper prototype - after all, the user would not need to try all the billions of pages himself to be convinced, so a representative set of pages would do. In addition, the web currently exists without any specific browser instance (IE/FF/Opera/younameit), so it is explorable with other browsers, if further proof is needed. And the concept of the web - well, you wouldn't have had knowledge of the billions of pages back in the 90's, when the concept was developed, so you would have presented a very limited set of pages anyway. There are very probably things that really cannot be paper prototyped with success, but that isn't a very typical case. "Would it be fair to draw an analogy between your pen-and-paper iterations and the term "user story", in the Agile / XP sense?" I don't see what the analogy could be. Can you explain what you mean?

    Posted by Karri-Pekka Laakso on Wednesday, January 7 2009 8:29am UTC

  15. Dorian, a great comment. I fully agree. Tim, I was thinking exactly the same way. At an extreme, the Kent Beck style of making LOTR would have started with reading out loud "As a director, I want a film of two hobbits to achieve global success and fame" to the film crew and then turning the camera on. :)

    Posted by Karri-Pekka Laakso on Wednesday, January 7 2009 8:33am UTC

  16. Michal, TV certainly is a relevant comparison to software development. For one thing, the types of shows you're talking about, ones that are allowed to run for a long time trying to find their audience while simultaneously developing ever more complex on-going characters and plots? This is a new thing that is very much the result of a number of newly found efficiencies in production and distribution techniques. Digital production has made it massively cheaper to produce TV episodes, and the profusion of cable channels has made it massively cheaper to air them. This has put a premium on experimentation towards finding a show that hits deeply with a particular audience rather than shallowly with a large one. In the old days, when your show got 1/3 of the audience simply by being on one of the three networks, what used to matter was not offending anyone enough to make them change the channel. This meant that your show not only could not evolve fluidly in response to changing audience demand and aesthetic imperative, but it couldn't even really deviate too far from the existing successful formulas used by earlier shows. Just like the point I was trying to make with the discussion of LOTR, I think this example of TV shows demonstrates how technological improvements that reduce the cost of production and distribution, and therefore the risk of failure, will increase the advantage held by agile process over BDUF. TV shows become rapidly evolving and exploratory rather than cliché-bound. Movie epics can proceed via wild experimentation in all of their means and materials as long as the business case represented by their subject matter stays viable. In the process of producing LOTR, Peter Jackson made each movie upwards of five times: first as scripts, second as storyboards, third as an "animatic" (animated storyboard), fourth as a 3d animatic with models and action figures, fifth as a series of sketched scenes with real actors on the in-progress sets, sixth in production, and seventh+ in editing and post production. Throughout all of the stages up to the last, relics of the earlier stages intermingled with the new ones. You can see bad hand animation standing in for special effects until the last few months of post production on Two Towers. While the books may constitute a spec of a kind, there is still an enormous possibility space of movies that could be made from them. Jackson navigated this space via a series of ever more solid sketches, each of which could be viewed in-situ and evaluated. He put a high premium on getting a constant high-level view of the current set of decisions to see if it was working, and on having the technical flexibility to be able to change, improve, and eliminate things as late as possible. In many ways, LOTR came out as it did because Jackson transformed the process from being much more formal and comprehensive, where a studio signs off on a big, specific plan at the start, to being much more like a series, where the studio agrees to a scenario and a subject matter and then sends the moviemakers off to evolve it towards an unknown end. I think this change, especially seeing it show up in both these places as well as in software development, speaks to the amazing leverage that technological advances give us for changing creative works in mid-creation. Agile is simply the most useful thinking we currently have for how to take best advantage of this new leverage.

    Posted by Greg Borenstein on Wednesday, January 7 2009 10:38am UTC

  17. Interesting discussion. I think it's a nice parallel to the chit chat about Domain Driven Design and Information Architecture that was started on Signal vs Noise, "iPhoto '09 and Domain Language": http://www.37signals.com/svn/posts/1507-iphoto-09-and-domain-language. Among all these cases, there are different emphases on where one should look for early validation of one's understanding of user / stakeholder input. As examples: is it validated in a purely conceptual diagram, in a detailed software model, in a paper prototype, or in running code? They also all have ideas about what can be discarded in the process of making something--and what's affordable to discard / what's fast and efficient to discard. In my own experience, it's hardest to really go back (to discard all the way) once real implementation of any kind has begun. If you have some code that "works" (for what it is), you have to be a real bad-ass to do select-all and delete to get a fresh start on making something that "works different." XP Refactoring is a great idea, but it can be a slow death by a thousand cuts if someone can't see when it's better to blow it all up and start over. I think you can be unheroic and still confidently discard your own IA diagrams and IxD prototypes. If anything, discarding is sometimes too easy. But, certain projects require someone to be really brave / stupid and actually build something that fails, and then be even braver and throw it all out, and start over. For example, I think the original Apple Stores were designed with this kind of intensity--but as physical prototypes.

    Posted by Jay Fienberg on Thursday, January 8 2009 4:08am UTC

  18. Dorian, The description of programming you offer sounds strongly influenced by Edsger Dijkstra's view of programming as mathematical proof (http://www.cs.utexas.edu/users/EWD/ewd10xx/EWD1036.PDF). There's a lot to learn from that linked 1988 talk, but I do think that Dijkstra is awfully dismissive of how programming can be fast, easy, and valuable even outside a scientific (or magical) understanding. Hardcore programmers have, I think, an overly rigid view of what's "right" in the sense that you use the term - often, and especially in cases such as communication or social software, what's right is what works, and what works is what's present. It's worthwhile just to show up. I mentioned Twitter earlier as a high-profile example of a software system whose evolution is tightly intertwined with that of its userbase - the past few days have seen a high-profile debate on security and web services that underlines how consensus-driven much software must be, and, I think, supports a view of programming as compromise and iteration. The entire comment thread at http://simonwillison.net/2009/Jan/2/adactio/ is worth a read, esp. if you're familiar with some of the social history of services authentication and the particular names involved here. Knowing that Alex Payne is a Twitter engineer is key to understanding the thread. Karri-Pekka, I don't believe that the web browser example is unrealistic at all. It's certainly at the far end of a scale that includes forms of groupware that introduce new kinds of behavior; interactions between artifacts and users make for new behaviors. It's certainly possible to get some understanding through paper prototyping, but there's a new thing that happens when computing "melts into behavior" (I think I got that from Adam Greenfield) which generally can't be predicted or prototyped. The web is a lot like this for me: a huge, ever-growing pulsating machine that's constantly under development and adaptation from a world of participants. Regarding the user story / pen-and-paper comment, I was asking about one of the techniques of XP & Agile that involves rendering requirements as user stories, short bits of functionality generally in the form of index cards and treated as a unit. The basic approach seems to involve estimation and prioritization based on these stories, with new ones coming in from the client and being weighed against the others (there's a small sketch of what I mean at the end of this comment). Greg, Looseness made possible by digital production in television is quite close to the cheapness consideration I mention above. I think what's happening with development, especially in areas like the web, is that a lot of things like client/server architecture and cross-platform widgets can be taken for granted. Digital filmmaking is reaping some of the same benefits, since it's now economically feasible to just keep a camera rolling for multiple takes, until the right effect or performance has been achieved. I think what I'm really learning from all these excellent comments is a greater awareness of context and its effects on cooperative projects. I particularly like Jay's comment that "you have to be a real bad-ass to do select-all and delete to get a fresh start". It's funny that the heaviest cost of all is the momentum of personal effort, the feeling that things shouldn't go to waste. Maybe not coincidentally, the other company that I've recently heard described as supporting one of the best throw-it-away-and-start-over cultures is also CEO'd by Steve Jobs - Pixar. They are able to produce great things and make a great first impression because their products all seem to have these story crisis moments, where the big picture comes clearly enough into focus for it to be obvious that it won't work. "Murder your darlings," said Sir Arthur Thomas Quiller-Couch - is that the superhuman ability to view any drastic change as cheap enough to be worthwhile?
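
To make that user-story idea a little more concrete, here's a tiny, purely illustrative sketch in Python - hypothetical story names and numbers, not any particular team's backlog or tool - showing stories as small units that carry a developer estimate and a client priority, with the next iteration planned by weighing them against each other:

    # Illustrative only: user stories as small, estimable, prioritized units.
    from dataclasses import dataclass

    @dataclass
    class UserStory:
        title: str       # one index card's worth of functionality
        estimate: float  # rough cost in ideal days, from the developers
        priority: int    # business value ranking from the client (1 = highest)

    backlog = [
        UserStory("Visitor can search listings by keyword", estimate=3, priority=1),
        UserStory("Visitor can save a search as an RSS feed", estimate=2, priority=3),
        UserStory("Admin can flag a listing for review", estimate=1, priority=2),
    ]

    def plan_iteration(stories, capacity_days):
        """Pick the highest-priority stories that fit this iteration's capacity."""
        chosen, remaining = [], capacity_days
        for story in sorted(stories, key=lambda s: s.priority):
            if story.estimate <= remaining:
                chosen.append(story)
                remaining -= story.estimate
        return chosen

    for story in plan_iteration(backlog, capacity_days=4):
        print(story.title)

The code itself is beside the point; what matters is that each story stays small enough to estimate, prioritize, and trade off against the others as new ones arrive from the client.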

    Posted by Michal Migurski on Thursday, January 8 2009 8:04am UTC

  19. Karri-Pekka: Thanks. Michal: With respect to the word "right", I was thinking about it more defined as "behaves in all the ways we want and none of the ways we don't want" rather than "mathematically correct". That said, however, I think there is a gold mine of untapped value in languages like Haskell. The caveat, however, is that "practical" I/O-based applications - even Hello World - are a consummate pain to express, and we wonder why it doesn't get any traction. Anyway, I digress. What I'm concerned most about operates at a different scale. In Sketching User Experiences (http://billbuxton.com/ - I'm sure you have a copy already), Bill Buxton effectively defines a "sketch" as any dirt-cheap activity that affords the exploration of an open question, and to me sketching is just one intermediate point along my specificity continuum (for lack of a better term). Personae and scenarios could show up on one side of sketches and specs/test cases could show up on the other. I can empathize with you and your work - which I imagine is highly data-driven and dynamic - that there's very little value indeed in trying to sketch certain things out on paper. But, you might, for example, "sketch" in Processing some work to be completed later in PostScript (hey, it could happen ;). I find, however, that we often treat code artifacts as sacrosanct and are eager to coerce a dodgy prototype into production because "all we need to do is fill in the blanks". We tend to try to do this even though filling in these blanks would cost more effort than starting from scratch. It's even more pressing when a non-expert (and/or someone cutting the cheques) gets a hold of it. If I were to make a stab at diagnosing this phenomenon, I'd say that code, by default, doesn't scream "NOT FOR PRODUCTION" loud enough. Yes, it's natural for us to eschew wasted effort, but I think we often waste far more effort in short-sighted attempts to save it. If the exploratory code could somehow assert its sketchiness, we would be compelled to rewrite it for production (which I imagine we would bulldoze through, having the sticking points already worked out). The approach I'm playing with right now is to do my "sketch" code using languages that I would never dream of releasing production code in (currently Common Lisp). Although back to Beck vs Cooper - there's one point I really want to address. Beck's MO says "get it into the hands of the customer ASAP" - in fact, principle #1 of the Agile Manifesto, if I'm not mistaken, reads "...early and continuous [sic] delivery of working [defined as??] software". Cooper's MO, however, says that not every software problem is an engineering/programming problem and there is room for a class of professional that can use a set of techniques (possibly even including writing exploratory code) that can, among other things, chop out most of the entropy and thrash that occurs in purely Agile/XP settings. The point I want to stress is that there's no reason why you can't get interim deliverables into the hands of clients that aren't code but actually do have value. Another side effect of our sacrosanct treatment of code, I find, is that it unduly eclipses the value of every other artifact we produce along the way. It occurs to me that the process of software production could be made much more predictable (not necessarily less costly) and produce much more valuable results if we adopted a way of getting as much detail as possible into the hands of programmers before they write their first line of code. Programmers, in my experience, work much more efficiently when they're front-loaded with as much detail as available about what to make - I know I sure do when I wear that hat. Engineers (Cooper makes this distinction in the lecture I linked earlier), however, are concerned with quantitative answers to discrete problems. Architects (Cooper's other distinction, presumably analogous to interaction designers, and echoed from MM-M) deal exclusively with where the product connects with people and do not concern themselves with the minutiae of the implementation. Cooper's salient point from that part of the lecture was that when we put everyone into a room and give them the title Developer, we lose, among other things, that clear division of labour. Anyway, I think that in particular is worth exploring. Finally, with respect to the pan-Agile claim that requirements are always changing, I flatly don't buy it. Some elements of the environment certainly do change and quite frequently. Others, however, haven't changed (and won't change) for decades. See Stewart Brand's discussion of shearing layers (http://en.wikipedia.org/wiki/Shearing_layers) from his book and series called How Buildings Learn. Now that is a notion I would kill to see adopted into the process of software production. Any further requirements in severe flux would naturally stem from the nature of the agreement with the client, which I think often allows them to be far too fickle. Rather than agreeing to produce an artifact of software (and this goes back to Dijkstra's "proof" idea), agree instead to solve the client's business problem and provide the software, among other things, as a demonstration of the solution. Anyhow, I always enjoy reading Dijkstra (and find it somewhat tantalizing that it's hand-written in 1988); thanks for the link.

    Posted by Dorian Taylor on Thursday, January 8 2009 9:51am UTC

  20. Very good points, Dorian. There is, however, one aspect I feel I should point out. Some research suggests that it might be counterproductive to have a programmer "front-loaded with as much detail as available about what to make." [1] Similar to how looking at the whole code base can prevent us from seeing the big picture, looking at the whole volume of available information might prevent us from seeing that big picture just as easily. This is not to disagree with what I believe is your intent behind the "front-loading" above. I believe that we do benefit from understanding the goal better. It's just that front-loading with "as much detail as possible" is, dare I say, likely to be counterproductive. Still, if given the option I'd probably ask for all the information I can get. After all, we humans are not all that rational. [1] "How to avoid impact from irrelevant and misleading information on your cost estimates", Simula Research Laboratory, 2006, http://simula.no/research/engineering/projects/best/seminars/estimation06/jorgensen/view (.ppt)

    Posted by Lasse Koskela on Friday, January 9 2009 6:23am UTC

