3.2 Effect of Mistakes on a Development Schedule

Michael Jackson (the singer, not the computer scientist) sang that "One bad apple don't spoil the whole bunch, baby." That might be true for apples, but it isn't true for software. One bad apple can spoil your whole project.

A group of ITT researchers reviewed 44 projects in 9 countries to examine the influence of 13 factors on development productivity (Vosburgh et al. 1984). The factors included the use of modern programming practices, code difficulty, performance requirements, level of client participation in requirements specification, personnel experience, and several others. They divided each factor into categories that you would expect to be associated with low, medium, and high performance. For example, they divided the "modern programming practices" factor into low use, medium use, and high use.

Figure 3-1 on the next page shows what the researchers found for the "use of modern programming practices" factor. The longer you study Figure 3-1, the more interesting it becomes. The general pattern it shows is representative of the findings for each of the productivity factors studied. The ITT researchers found that projects in the categories they expected to have poor productivity did in fact have poor productivity, such as the narrow range shown in the Low category in Figure 3-1. But productivity in the high-performance categories varied greatly, such as the wide range shown in the High category in Figure 3-1. Productivity of projects in the High category varied from poor to excellent.

That the projects expected to have poor productivity did in fact have poor productivity shouldn't surprise you. But the finding that many of the projects expected to have excellent productivity actually had poor productivity might be a surprise. What this graph and other graphs like it throughout the book show is that the use of any specific best practice is necessary but not sufficient for achieving maximum development speed. Even if you do a few things right, such as making high use of modern programming practices, you might still make a mistake that nullifies your productivity gains.

When thinking about rapid development, it's tempting to think that all you have to do is identify the root causes of slow development and eliminate them—and then you'll have rapid development. The problem is that there aren't just a handful of root causes of slow development, and in the end trying to identify them isn't very useful. It's like asking, "What is the root cause of my not being able to run a 4-minute mile?" Well, I'm too old. I weigh too much. I'm too out of shape. I'm not willing to train that hard. I don't have a world-class coach or athletic facility. I wasn't all that fast even when I was younger. The list goes on and on. When you talk about exceptional achievements, the reasons that people don't rise to the top are simply too numerous to list.

The Giga-Quote team in Case Study 3-1 made many of the mistakes that have plagued software developers since the earliest days of computing. The software-development road is mined with potholes, and the potholes you fall into partially determine how quickly or slowly you develop software. In software, one bad apple can spoil the whole bunch, baby. To slip into slow development, all you need to do is make one really big mistake; to achieve rapid development, you need to avoid making any big mistakes. The next section lists the most common of those big mistakes.

3.3 Classic Mistakes Enumerated

Some ineffective development practices have been chosen so often, by so many people, with such predictable, bad results that they deserve to be called "classic mistakes." Most of the mistakes have a seductive appeal. Do you need to rescue a project that's behind schedule? Add more people! Do you want to reduce your schedule? Schedule more aggressively! Is one of your key contributors aggravating the rest of the team? Wait until the end of the project to fire him! Do you have a rush project to complete? Take whatever developers are available right now and get started as soon as possible!

Developers, managers, and customers usually have good reasons for making the decisions they do, and the seductive appeal of the classic mistakes is part of the reason these mistakes have been made so often. But because they have been made so many times, their consequences have become easy to predict. And classic mistakes rarely produce the results that people hope for.

This section enumerates three dozen classic mistakes. I have personally seen each of these mistakes made at least once, and I've made more than a few of them myself. Many of them crop up in Case Study 3-1. The common denominator of these mistakes is that avoiding them won't necessarily give you rapid development, but failing to avoid them will definitely give you slow development. If some of these mistakes sound familiar, take heart—many other people have made them too. Once you understand their effect on development speed, you can use this list to help with your project planning and risk management.

Some of the more significant mistakes are discussed in their own sections in other parts of this book. Others are not discussed further. For ease of reference, the list has been divided along the development-speed dimensions of people, process, product, and technology.

3.4 Escape from Gilligan's Island

A complete list of classic mistakes would go on for pages more, but those presented are the most common and the most serious. As Seattle University's David Umphress points out, watching most organizations attempt to avoid these classic mistakes seems like watching reruns of Gilligan's Island. At the beginning of each episode, Gilligan, the Skipper, or the Professor comes up with a cockamamie scheme to get off the island. The scheme seems as though it's going to work for a while, but as the episode unfolds, something goes wrong, and by the end of the episode the castaways find themselves right back where they started—stuck on the island. Similarly, most companies at the end of each project find that they have made yet another classic mistake and that they have delivered yet another project behind schedule or over budget or both.

Your Own List of Worst Practices

Be aware of the classic mistakes. Create lists of "worst practices" to avoid on future projects. Start with the list in this chapter. Add to the list by conducting project postmortems to learn from your team's mistakes. Encourage other projects within your organization to conduct postmortems so that you can learn from their mistakes. Exchange war stories with your colleagues in other organizations, and learn from their experiences. Display your list of mistakes prominently so that people will see it and learn not to make the same mistakes yet another time.

Case Study 4-1. Lack of Fundamentals

"We thought we had figured out what we were doing," Bill told Charles. "We did pretty well on version 3 of our Sales Bonus Program, SBP, which is the program we use to pay our field agents their commissions. But on version 4, everything fell apart." Bill had been the manager of SBP versions 1 through 4, and Charles was a consultant Giga-Safe had called in to help figure out why version 4 had been so problematic.

"What were the differences between versions 3 and 4?" Charles asked.

"We had problems with versions 1 and 2," Bill responded, "but by version 3 we felt that we had put our problems behind us. Development proceeded with hardly any problems at all. Our estimates were accurate, partly because we've learned to pad them with a 30-percent safety margin. The developers had almost no problems with forgotten tasks, tools, or design elements. Everything went great."

"So what happened on version 4?" Charles prompted.

"That was a different story. Version 3 was an evolutionary upgrade, but version 4 was a completely new product developed from scratch.

"The team members tried to apply the lessons they'd learned on SBP versions 1 through 3. But partway through the project, the schedule began to slip. Technical tasks turned out to be more complicated than anticipated. Tasks that the developers had estimated would take 2 days instead took 2 to 3 weeks. There were problems with some new development tools, and the team lost ground fighting with them. The new team members didn't know all the team's rules, and the team lost work and time because people kept overwriting each other's working files. In the end, no one could predict when the product would be ready until the day it actually was ready. Version 4 was almost 100 percent late."

"That does sound pretty bad," Charles agreed. "You mentioned that you had had some problems with versions 1 and 2. Can you tell me about those projects?"

"Sure," Bill replied. "On version 1 of SBP, the project was complete chaos.
Total project estimates and task scheduling seemed almost random. Technical problems turned out to be harder than expected. Development tools that were supposed to save time actually added time to the schedule. The development team took one schedule slip after another, and no one knew when the product would be ready to release until a day or two before it actually was ready. In the end, the SBP team delivered the product about 100 percent over schedule."

"That sounds a lot like what happened on version 4," Charles said.

"That's right." Bill shook his head. "I thought we had learned our lesson a long time ago."

"What about version 2?" Charles asked.

"On version 2, development proceeded more smoothly than on version 1. The project estimates and task schedules seemed more realistic, and the technical work seemed to be more under control. There were fewer problems with development tools, and the development team's work took about as long as they had estimated. They made up the estimation errors they did have through increased overtime.

"But toward the end of the project, the team discovered several tasks that they hadn't included in their original estimates. They also discovered fundamental design flaws, which meant they had to rework 10 to 15 percent of the system. They took one big schedule slip to include the forgotten tasks and the rework. They finished that work, found a few more problems, took another schedule slip, and finally delivered the product about 30 percent late. That's when we learned to add a 30-percent safety margin to our schedules."

"And then version 3 went smoothly?" Charles asked.

"Right," Bill agreed.

"I take it that versions 1 through 3 used the same code base?" Charles asked.

"Yes."

"Did versions 1 through 3 use the same team members?"

"Yes, but several developers quit after version 3, so most of the version 4 team hadn't worked on the project before."

"Thanks," Charles said. "That's all helpful."
Charles spent the rest of the day talking with the development team and then met with Bill again that night.

"What I've got to tell you might not be easy for you to hear," Charles said. "As a consultant, I see dozens of projects a year, and throughout my career I've seen hundreds of projects in more than a hundred organizations. The pattern you experienced with SBP versions 1 through 4 is actually fairly common.

"Earlier, you implied that the developers weren't using automated source-code control, and I confirmed that this afternoon in my talks with your developers. I also confirmed that the development team doesn't use design or code reviews. The organization relies on seat-of-the-pants estimates even though more effective estimation methods are available."

"OK," Bill said. "Those things are all true. But what do we need to do so that we never experience another project like version 4 again?"

"That's the part that's going to be hard for you to hear," Charles said. "There isn't any one thing you need to do. You need to improve on the software-development fundamentals, or you'll see this same pattern again and again. You need to strengthen your foundation. On the management side, you need more effective scheduling, planning, tracking, and measurement. On the technical side, you need more effective requirements management, design, construction, and configuration management. And you need much stronger quality assurance."

"But we did fine on version 3," Bill objected.

"That's right," Charles agreed. "You will do fine once in a while—when you're working on a familiar product with team members who have worked on the same product before. Most of the version 3 team had also worked on versions 1 and 2. One of the reasons that organizations think they don't need to master software-development fundamentals is that they do have a few successes.
"They can get pretty good at estimating and planning for a specific product. They think they're doing well, and they don't think that anyone else is doing any better.

"But their development capability is built on a fragile foundation. They really only know how to develop one specific product in one specific way. When they are confronted with major changes in personnel, development tools, development environment, or product concept, that fragile development capability breaks down. Suddenly they find themselves back at square one. That's what happened on SBP 4 when you had to rewrite the product from scratch with new developers. That's why your experiences on version 1 and version 4 were so similar."

"I hadn't thought about it that way before, but maybe you're right," Bill said quietly. "That sounds like a lot of work, though. I don't know if we can justify it."

"If you don't master the fundamentals, you'll do OK on the easy projects, but your hard projects will fall apart," Charles said, "and those are usually the ones you really care about."