Critical acclaim for Steve McConnell’s CODE COMPLETE

“Every half an age or so, you come across a book that short-circuits the school of experience and saves you years of purgatory.... I cannot adequately express how good this book really is. Code Complete is a pretty lame title for a work of brilliance.”
PC Techniques

“Microsoft Press has published what I consider to be the definitive book on software construction: Code Complete by Steve McConnell. This is a book that belongs on every software developer’s shelf.”
Software Development

“Every programmer should read this outstanding book.”
Computer

“If you are or aspire to be a professional programmer, this may be the wisest $35 investment you’ll ever make. Don’t stop to read the rest of this review: just run out and buy it. McConnell’s stated purpose is to narrow the gap between the knowledge of industry gurus and common commercial practice.... The amazing thing is that he succeeds.”
IEEE Micro

“Code Complete should be required reading for anyone...in software development.”
C Users Journal

“I’m encouraged to stick my neck out a bit further than usual and recommend, without reservation, Steve McConnell’s Code Complete.... My copy has replaced my API reference manuals as the book that’s closest to my keyboard while I work.”
Windows Tech Journal

“This well-written but massive tome is arguably the best single volume ever written on the practical aspects of software implementation.”
Embedded Systems Programming

“This is the best book on software engineering that I have yet read.”
.EXE Magazine

“This book deserves to become a classic, and should be compulsory reading for all developers, and those responsible for managing them.”
Program Now

PUBLISHED BY
Microsoft Press
A Division of Microsoft Corporation
One Microsoft Way
Redmond, Washington 98052-6399

Copyright © 1996 by Steve McConnell

All rights reserved.
No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

Library of Congress Cataloging-in-Publication Data
McConnell, Steve.
Rapid Development : Taming Wild Software Schedules / Steve McConnell.
p. cm.
Includes index.
ISBN 978-1-556-15900-8
1. Computer software--Development. I. Title.
QA76.76.D47M393 1996
005.1'068--dc20 96-21517 CIP

Printed and bound in the United States of America.

14 15 16 17 18 19 MLML 06 05 04 03 02 01

Distributed in Canada by Penguin Books Canada Limited.

A CIP catalogue record for this book is available from the British Library.

Microsoft Press books are available through booksellers and distributors worldwide. For further information about international editions, contact your local Microsoft Corporation office or contact Microsoft Press International directly at fax (425) 936-7329. Visit our Web site at www.microsoft.com/mspress. Send comments to mspinput@microsoft.com.

AT&T is a registered trademark of American Telephone and Telegraph Company. Apple and Macintosh are registered trademarks of Apple Computer, Inc. Boeing is a registered trademark of The Boeing Company. Borland and Delphi are registered trademarks of Borland International, Inc. FileMaker is a registered trademark of Claris Corporation. Dupont is a registered trademark of E.I. Du Pont de Nemours and Company. Gupta is a registered trademark of Gupta Corporation (a California Corporation). Hewlett-Packard is a registered trademark of Hewlett-Packard Company. Intel is a registered trademark of Intel Corporation. IBM is a registered trademark of International Business Machines Corporation. ITT is a registered trademark of International Telephone and Telegraph Corporation. FoxPro, Microsoft, MS-DOS, PowerPoint, Visual Basic, Windows, and Windows NT are registered trademarks and Visual FoxPro is a trademark of Microsoft Corporation.
Powersoft is a registered trademark and PowerBuilder is a trademark of PowerSoft Corporation. Raytheon is a registered trademark of Raytheon Company. Other product and company names mentioned herein may be the trademarks of their respective owners.

The example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

Acquisitions Editor: David J. Clark
Project Editor: Jack Litewka

Contents

Case Studies
Reference Tables
Preface

PART I  EFFICIENT DEVELOPMENT

1  Welcome to Rapid Development
   What Is Rapid Development? • Attaining Rapid Development

2  Rapid-Development Strategy
   General Strategy for Rapid Development • Four Dimensions of Development Speed • General Kinds of Fast Development • Which Dimension Matters the Most? • An Alternative Rapid-Development Strategy • Further Reading

3  Classic Mistakes
   Case Study in Classic Mistakes • Effect of Mistakes on a Development Schedule • Classic Mistakes Enumerated • Escape from Gilligan’s Island • Further Reading

4  Software-Development Fundamentals
   Management Fundamentals • Technical Fundamentals • Quality-Assurance Fundamentals • Following the Instructions • Further General Reading

5  Risk Management
   Elements of Risk Management • Risk Identification • Risk Analysis • Risk Prioritization • Risk Control • Risk, High Risk, and Gambling • Further Reading

PART II  RAPID DEVELOPMENT

6  Core Issues in Rapid Development
   Does One Size Fit All? • What Kind of Rapid Development Do You Need? • Odds of Completing on Time • Perception and Reality • Where the Time Goes • Development-Speed Trade-Offs • Typical Schedule-Improvement Pattern • Onward to Rapid Development • Further Reading

7  Lifecycle Planning
   Pure Waterfall • Code-and-Fix • Spiral • Modified Waterfalls • Evolutionary Prototyping • Staged Delivery • Design-to-Schedule • Evolutionary Delivery • Design-to-Tools • Commercial Off-the-Shelf Software • Choosing the Most Rapid Lifecycle for Your Project • Further Reading

8  Estimation
   The Software-Estimation Story • Estimation-Process Overview • Size Estimation • Effort Estimation • Schedule Estimation • Ballpark Schedule Estimates • Estimate Refinement • Further Reading

9  Scheduling
   Overly Optimistic Scheduling • Beating Schedule Pressure • Further Reading

10  Customer-Oriented Development
   Customers’ Importance to Rapid Development • Customer-Oriented Practices • Managing Customer Expectations • Further Reading

11  Motivation
   Typical Developer Motivations • Using the Top Five Motivation Factors • Using Other Motivation Factors • Morale Killers • Further Reading

12  Teamwork
   Software Uses of Teamwork • Teamwork’s Importance to Rapid Development • Creating a High-Performance Team • Why Teams Fail • Long-Term Teambuilding • Summary of Teamwork Guidelines • Further Reading

13  Team Structure
   Team-Structure Considerations • Team Models • Managers and Technical Leads • Further Reading

14  Feature-Set Control
   Early Project: Feature-Set Reduction • Mid-Project: Feature-Creep Control • Late Project: Feature Cuts • Further Reading

15  Productivity Tools
   Role of Productivity Tools in Rapid Development • Productivity-Tool Strategy • Productivity-Tool Acquisition • Productivity-Tool Use • Silver-Bullet Syndrome • Further Reading

16  Project Recovery
   General Recovery Options • Recovery Plan • Further Reading

PART III  BEST PRACTICES

Introduction to Best Practices
   Organization of Best-Practice Chapters • Summary of Best-Practice Candidates • Summary of Best-Practice Evaluations

17  Change Board
18  Daily Build and Smoke Test
19  Designing for Change
20  Evolutionary Delivery
21  Evolutionary Prototyping
22  Goal Setting
23  Inspections
24  Joint Application Development (JAD)
25  Lifecycle Model Selection
26  Measurement
27  Miniature Milestones
28  Outsourcing
29  Principled Negotiation
30  Productivity Environments
31  Rapid-Development Languages (RDLs)
32  Requirements Scrubbing
33  Reuse
34  Signing Up
35  Spiral Lifecycle Model
36  Staged Delivery
37  Theory-W Management
38  Throwaway Prototyping
39  Timebox Development
40  Tools Group
41  Top-10 Risks List
42  User-Interface Prototyping
43  Voluntary Overtime

Bibliography
Index

Case Studies

2-1. Rapid Development Without a Clear Strategy
2-2. Rapid Development with a Clear Strategy
3-1. Classic Mistakes
4-1. Lack of Fundamentals
5-1. Lack of Contractor Risk Management
5-2. Systematic Risk Management
6-1. Wandering in the Fuzzy Front End
7-1. Ineffective Lifecycle Model Selection
7-2. Effective Lifecycle Model Selection
8-1. Seat-of-the-Pants Project Estimation
8-2. Careful Project Estimation
9-1. A Successful Schedule Negotiation
10-1. The Requirements Club
10-2. The Requirements Club Revisited
11-1. A Disheartening Lunch with the Boss
11-2. A Highly Motivational Environment
12-1. You Call This a Team?
12-2. A High-Performance Team
12-3. Typical Team-Member Selection
12-4. A Second High-Performance Team
13-1. Mismatch Between Project Objectives and Team Structure
13-2. Good Match Between Project Objectives and Team Structure
14-1. Managing Change Effectively
15-1. Ineffective Tool Use
15-2. Effective Tool Use
16-1. An Unsuccessful Project Recovery
16-2. A Successful Project Recovery

Reference Tables

2-1. Characteristics of Standard Approaches to Schedule-Oriented Development
2-2. Code-Like-Hell Approach Compared to This Book’s Approach
3-1. Summary of Classic Mistakes
5-1. Levels of Risk Management
5-2. Most Common Schedule Risks
5-3. Potential Schedule Risks
5-4. Example of a Risk-Assessment Table
5-5. Example of a Prioritized Risk-Assessment Table
5-6. Means of Controlling the Most Common Schedule Risks
5-7. Example of a “Top-10 Risks List”
6-1. Approximate Activity Breakdown by Size of Project
7-1. Lifecycle Model Strengths and Weaknesses
8-1. Estimate Multipliers by Project Phase
8-2. Function-Point Multipliers
8-3. Example of Computing the Number of Function Points
8-4. Example of a Risk-Quantification Estimate
8-5. Example of a Case-Based Estimate
8-6. Example of a Confidence-Factor Estimate
8-7. Exponents for Computing Schedules from Function Points
8-8. Shortest Possible Schedules
8-9. Efficient Schedules
8-10. Nominal Schedules
8-11. Example of a Single-Point–Estimation History
8-12. Example of a Range-Estimation History
9-1. Scheduling History of Word for Windows 1.0
11-1. Comparison of Motivators for Programmer Analysts vs. Managers and the General Population
11-2. Team Performance Ranked Against Objectives That Teams Were Told to Optimize
12-1. Practical Guidelines for Team Members and Leaders
13-1. Team Objectives and Team Structures
15-1. Example of Savings Realized by Switching from a 3GL to a 4GL for 50 Percent of a 32,000 LOC Project
15-2. Example of Savings Realized by Switching from a 3GL to a 4GL for 100 Percent of a 32,000 LOC Project
III-1. Summary of Best-Practice Candidates
III-2. Summary of Best-Practice Evaluations
26-1. Examples of Kinds of Measurement Data
26-2. Example of Time-Accounting Activities
28-1. Vendor-Evaluation Questionnaire
28-2. Contract Considerations
30-1. Differences in Office Environments Between Best and Worst Performers in a Programming Competition
31-1. Approximate Function-Points to Lines-of-Code Conversions
31-2. Approximate Language Levels
36-1. Example of a Staged-Delivery Schedule for a Word Processor
37-1. Project Stakeholders and Their Objectives
37-2. Steps in Theory-W Project Management

Preface

Software developers are caught on the horns of a dilemma. One horn of the dilemma is that developers are working too hard to have time to learn about effective practices that can solve most development-time problems; the other horn is that they won’t get the time until they do learn more about rapid development.

Other problems in our industry can wait. It’s hard to justify taking time to learn more about quality when you’re under intense schedule pressure to “just ship it.” It’s hard to learn more about usability when you’ve worked 20 days in a row and haven’t had time to see a movie, go shopping, work out, read the paper, mow your lawn, or play with your kids. Until we as an industry learn to control our schedules and free up time for developers and managers to learn more about their professions, we will never have enough time to put the rest of our house in order.

The development-time problem is pervasive. Several surveys have found that about two-thirds of all projects substantially overrun their estimates (Lederer and Prasad 1992, Gibbs 1994, Standish Group 1994). The average large project misses its planned delivery date by 25 to 50 percent, and the size of the average schedule slip increases with the size of the project (Jones 1994). Year after year, development-speed issues have appeared at the tops of lists of the most critical issues facing the software-development community (Symons 1991).

Although the slow-development problem is pervasive, some organizations are developing rapidly.
Researchers have found 10-to-1 differences in productivity between companies within the same industries, and some researchers have found even greater variations (Jones 1994). The purpose of this book is to provide the groups that are currently on the “1” side of that 10-to-1 ratio with the information they need to move toward the “10” side of the ratio. This book will help you bring your projects under control. It will help you deliver more functionality to your users in less time. You don’t have to read the whole book to learn something useful; no matter what state your project is in, you will find practices that will enable you to improve its condition.

Who Should Read This Book?

Slow development affects everyone involved with software development, including developers, managers, clients, and end-users—even their families and friends. Each of these groups has a stake in solving the slow-development problem, and there is something in this book for each of them. This book is intended to help developers and managers know what’s possible, to help managers and clients know what’s realistic, and to serve as an avenue of communication between developers, managers, and clients so that they can tailor the best possible approach to meet their schedule, cost, quality, and other goals.

Technical Leads

This book is written primarily with technical leads or team leads in mind. If that’s your role, you usually bear primary responsibility for increasing the speed of software development, and this book explains how to do that. It also describes the development-speed limits so that you’ll have a firm foundation for distinguishing between realistic improvement programs and wishful-thinking fantasies.

Some of the practices this book describes are wholly technical. As a technical lead, you should have no problem implementing those. Other practices are more management oriented, and you might wonder why they are included here.
In writing the book, I have made the simplifying assumption that you are Technical Super Lead—faster than a speeding hacker; more powerful than a loco-manager; able to leap both technical problems and management problems in a single bound. That is somewhat unrealistic, I know, but it saves both of us from the distraction of my constantly saying, “If you’re a manager, do this, and if you’re a developer, do that.” Moreover, assuming that technical leads are responsible for both technical and management practices is not as far-fetched as it might sound. Technical leads are often called upon to make recommendations to upper management about technically oriented management issues, and this book will help prepare you to do that.

Individual Programmers

Many software projects are run by individual programmers or self-managed teams, and that puts individual technical participants into de facto technical-lead roles. If you’re in that role, this book will help you improve your development speed for the same reasons that it will help bona fide technical leads.

Managers

Managers sometimes think that achieving rapid software development is primarily a technical job. If you’re a manager, however, you can usually do as much to improve development speed as your developers can. This book describes many management-level rapid-development practices. Of course, you can also read the technically oriented practices to understand what your developers can do at their level.

Key Benefits of This Book

I conceived of this book as a Common Sense for software developers. Like Thomas Paine’s original Common Sense, which laid out in pragmatic terms why America should secede from Mother England, this book lays out in pragmatic terms why many of our most common views about rapid development are fundamentally broken. These are the times that try developers’ souls, and, for that reason, this book advocates its own small revolution in software-development practices.
My view of software development is that software projects can be optimized for any of several goals—lowest defect rate, fastest execution speed, greatest user acceptance, best maintainability, lowest cost, or shortest development schedule. Part of an engineering approach to software is to balance trade-offs: Can you optimize for development time by cutting quality? By cutting usability? By requiring developers to work overtime? When crunch time comes, how much schedule reduction can you ultimately achieve? This book helps answer such key trade-off questions as well as other questions.

Improved development speed. You can use the strategy and best practices described in this book to achieve the maximum possible development speed in your specific circumstances. Over time, most people can realize dramatic improvements in development speed by applying the strategies and practices described in this book. Some best practices won’t work on some kinds of projects, but for virtually any kind of project, you’ll find other best practices that will. Depending on your circumstances, “maximum development speed” might not be as fast as you’d like, but you’ll never be completely out of luck just because you can’t use a rapid-development language, are maintaining legacy code, or work in a noisy, unproductive environment.

Rapid-development slant on traditional topics. Some of the practices described in this book aren’t typically thought of as rapid-development practices. Practices such as risk management, software-development fundamentals, and lifecycle planning are more commonly thought of as “good software-development practices” than as rapid-development methodologies. These practices, however, have profound development-speed implications that in many cases dwarf those of the so-called rapid-development methods. This book puts the development-speed benefits of these practices into context with other practices.

Practical focus. To some people, “practical” means “code,” and to those people I have to admit that this book might not seem very practical. I’ve avoided code-focused practices for two reasons. First, I’ve already written 800 pages about effective coding practices in Code Complete (Microsoft Press, 1993). I don’t have much more to say about them. Second, it turns out that many of the critical insights about rapid development are not code-focused; they’re strategic and philosophical. Sometimes, there is nothing more practical than a good theory.

Quick-reading organization. I’ve done all I can to present this book’s rapid-development information in the most practical way possible. The first 400 pages of the book (Parts I and II) describe a strategy and philosophy of rapid development. About 50 pages of case studies are integrated into that discussion so that you can see how the strategy and philosophy play out in practice. If you don’t like case studies, they’ve been formatted so that you can easily skip them. The rest of the book consists of a set of rapid-development best practices. The practices are described in quick-reference format so that you can skim to find the practices that will work best on your projects. The book describes how to use each practice, how much schedule reduction to expect, and what risks to watch out for. The book also makes extensive use of marginal icons and text to help you quickly find additional information related to the topic you’re reading about, avoid classic mistakes, zero in on best practices, and find quantitative support for many of the claims made in this book.

A new way to think about the topic of rapid development. In no other area of software development has there been as much disinformation as in the area of rapid development. Nearly useless development practices have been relentlessly hyped as “rapid-development practices,” which has caused many developers to become cynical about claims made for any development practices whatsoever.
Other practices are genuinely useful, but they have been hyped so far beyond their real capabilities that they too have contributed to developers’ cynicism. Each tool vendor and each methodology vendor wants to convince you that their new silver bullet will be the answer to your development needs. In no other software area do you have to work as hard to separate the wheat from the chaff. This book provides guidelines for analyzing rapid-development information and finding the few grains of truth.

This book provides ready-made mental models that will allow you to assess what the silver-bullet vendors tell you and will also allow you to incorporate new ideas of your own. When someone comes into your office and says, “I just heard about a great new tool from the GigaCorp Silver Bullet Company that will cut our development time by 80 percent!” you will know how to react. It doesn’t matter that I haven’t said anything specifically about the GigaCorp Silver Bullet Company or their new tool. By the time you finish this book, you’ll know what questions to ask, how seriously to take GigaCorp’s claims, and how to incorporate their new tool into your development environment, if you decide to do that.

Unlike other books on rapid development, I’m not asking you to put all of your eggs into a single, one-size-fits-all basket. I recognize that different projects have different needs, and that one magic method is usually not enough to solve even one project’s schedule problems. I have tried to be skeptical without being cynical—to be critical of practices’ effectiveness but to stop short of assuming that they don’t work. I revisit those old, overhyped practices and salvage some that are still genuinely useful—even if they aren’t as useful as they were originally promised to be.

Why is this book about rapid development so big? Developers in the IS, shrink-wrap, military, and software-engineering fields have all discovered valuable rapid-development practices, but the people from these different fields rarely talk to one another. This book collects the most valuable practices from each field, bringing together rapid-development information from a wide variety of sources for the first time.

Does anyone who needs to know about rapid development really have time to read 650 pages about it? Possibly not, but a book half as long would have had to be oversimplified to the point of uselessness. To compensate, I’ve organized the book so that it can be read quickly and selectively—you can read short snippets while you’re traveling or waiting. Chapters 1 and 2 contain the material that you must read to understand how to develop products more quickly. After you read those chapters, you can read whatever interests you most.

Why This Book Was Written

Clients’ and managers’ first response to the problem of slow development is usually to increase the amount of schedule pressure and overtime they heap on developers. Excessive schedule pressure occurs in about 75 percent of all large projects and in close to 100 percent of all very large projects (Jones 1994). Nearly 60 percent of developers report that the level of stress they feel is increasing (Glass 1994c). The average developer in the U.S. works from 48 to 50 hours per week (Krantz 1995). Many work considerably more.

In this environment, it isn’t surprising that general job satisfaction of software developers has dropped significantly in the last 15 years (Zawacki 1993), and at a time when the industry desperately needs to be recruiting additional programmers to ease the schedule pressure, developers are spreading the word to their younger sisters, brothers, and children that our field is no fun anymore.

Clearly our field can be fun.
Many of us got into it originally because we couldn’t believe that people would actually pay us to write software. But something not-so-funny happened on the way to the forum, and that something is intimately bound up with the topic of rapid development. It’s time to start shoring up the dike that separates software developers from the sea of scheduling madness. This book is my attempt to stick a few fingers into that dike, holding the madness at bay long enough to get the job started.

Acknowledgments

Heartfelt thanks go first to Jack Litewka, the project editor, for making the creation of this book a thoroughly constructive and enjoyable experience. Thanks also go to Kim Eggleston for the book’s design, to Michael Victor for the diagrams, and to Mark Monlux for the terrific illustrations. Sally Brunsman, David Clark, Susanne Freet, Dean Holmes, Wendy Maier, and Heidi Saastamoinen also helped this project go smoothly. Literally dozens of other people contributed to this book in one way or another; I didn’t have personal contact with any of them, but I appreciate their contributions, too. (Chief among these, I am told, are layout artist Jeannie McGivern and production manager Jean Trenary of ArtSource and Microsoft Press’s proof/copyedit platoon supervised by Brenda Morris: Richard Carey, Roger LeBlanc, Patrick Forgette, Ron Drummond, Patricia Masserman, Paula Thurman, Jocelyn Elliott, Deborah Long, and Devon Musgrave.)

Microsoft Corporation’s technical library provided invaluable aid in digging up the hundreds of books and articles that laid the foundation for this book. Keith Barland spearheaded that effort, making my research efforts much less arduous and time-consuming than they otherwise might have been. Other people at the library who helped included Janelle Jones, Christine Shannon, Linda Shaw, Amy Victor, Kyle Wagner, Amy Westfall, and Eilidh Zuvich.
I expound on the virtue of reviews in several places in this book, and this book has benefited greatly from extensive peer reviews. Al Corwin, Pat Forman, Tony Garland, Hank Meuret, and Matt Peloquin stuck with the project from beginning to end. Thanks to them for seeing that the book you hold in your hands doesn’t look very much like the book I originally set out to write! I also received valuable comments from Wayne Beardsley, Duane Bedard, Ray Bernard, Bob Glass, Sharon Graham, Greg Hitchcock, Dave Moore, Tony Pisculli, Steve Rinn, and Bob Stacy—constructive critics, all. David Sommer (age 11) came up with the idea for the last panel of Figure 14-3. Thanks, David. And, finally, I’d like to thank my wife, Tammy, for her moral support and good humor. I have to start working on my third book immediately so that she will stop elbowing me in the ribs and calling me a Two-Time Author!

Bellevue, Washington
June 1996

3  Classic Mistakes

Contents
3.1 Case Study in Classic Mistakes
3.2 Effect of Mistakes on a Development Schedule
3.3 Classic Mistakes Enumerated
3.4 Escape from Gilligan’s Island

Related Topics
Risk management: Chapter 5
Rapid-development strategy: Chapter 2

SOFTWARE DEVELOPMENT IS A COMPLICATED ACTIVITY. A typical software project can present more opportunities to learn from mistakes than some people get in a lifetime. This chapter examines some of the classic mistakes that people make when they try to develop software rapidly.

3.1 Case Study in Classic Mistakes

The following case study is a little bit like the children’s picture puzzles in which you try to find all the objects whose names begin with the letter “M”. How many classic mistakes can you find in the following case study?

Case Study 3-1. Classic Mistakes

Mike, a technical lead for Giga Safe, was eating lunch in his office and looking out his window on a bright April morning.

“Mike, you got the funding for the Giga-Quote program!
Congratulations!” It was Bill, Mike’s boss at Giga, a medical insurance company. “The executive committee loved the idea of automating our medical insurance quotes. It also loved the idea of uploading the day’s quotes to the head office every night so that we always have the latest sales leads online. I’ve got a meeting now, but we can discuss the details later. Good job on that proposal!”

Mike had written the proposal for the Giga-Quote program months earlier, but his proposal had been for a stand-alone PC program without any ability to communicate with the head office. Oh well. This would give him a chance to lead a client-server project in a modern GUI environment—something he had wanted to do. They had almost a year to do the project, and that should give them plenty of time to add a new feature. Mike picked up the phone and dialed his wife’s number. “Honey, let’s go out to dinner tonight to celebrate...”

The next morning, Mike met with Bill to discuss the project. “OK, Bill. What’s up? This doesn’t sound like quite the same proposal I worked on.”

Bill felt uneasy. Mike hadn’t participated in the revisions to the proposal, but there hadn’t been time to involve him. Once the executive committee heard about the Giga-Quote program, they’d taken over. “The executive committee loves the idea of building software to automate medical insurance quotes. But they want to be able to transfer the field quotes into the mainframe computer automatically. And they want to have the system done before our new rates take effect January 1. They moved the software-complete date you proposed up from March 1 to November 1, which shrinks your schedule to 6 months.”

Mike had estimated the job would take 12 months. He didn’t think they had much chance of finishing in 6 months, and he told Bill so. “Let me get this straight,” Mike said.
“It sounds like you’re saying that the committee added a big communications requirement and chopped the schedule from 12 months to 6?”

Bill shrugged. “I know it will be a challenge, but you’re creative, and I think you can pull it off. They approved the budget you wanted, and adding the communications link can’t be that hard. You asked for 36 staff-months, and you got it. You can recruit anyone you like to work on the project and increase the team size, too.” Bill told him to go talk with some other developers and figure out a way to deliver the software on time.

Mike got together with Carl, another technical lead, and they looked for ways to shorten the schedule. “Why don’t you use C++ and object-oriented design?” Carl asked. “You’ll be more productive than with C, and that should shave a month or two off the schedule.” Mike thought that sounded good. Carl also knew of a report-building tool that was supposed to cut development time in half. The project had a lot of reports, so those two changes would get them down to about 9 months. They were due for newer, faster hardware, too, and that could shave off a couple weeks. If he could recruit really top-notch developers, that might bring them down to about 7 months. That should be close enough. Mike took his findings back to Bill.

“Look,” Bill said. “Getting the schedule down to 7 months is good, but it isn’t good enough. The committee was very clear about the 6-month deadline. They didn’t give me a choice. I can get you the new hardware you want, but you and your team are going to have to find some way to get the schedule down to 6 months or work some overtime to make up the difference.”

Mike considered the fact that his initial estimate had just been a ballpark guess and thought maybe he could pull it off in 6 months. “OK, Bill. I’ll hire a couple of sharp contractors for the project.
Maybe we can find some people with communications experience to help with uploading data from the PC to the mainframe.”

By May 1, Mike had put a team together. Jill, Sue, and Tomas were solid, in-house developers, and they happened to be unassigned. He rounded out the team with Keiko and Chip, two contractors. Keiko had experience both on PCs and the kind of mainframe they would interface with. Jill and Tomas had interviewed Chip and recommended against hiring him, but Mike was impressed. He had communications experience and was available immediately, so Mike hired him anyway.

At the first team meeting, Bill told the team that the Giga-Quote program was strategically important to the Giga Safe Corporation. Some of the top people in the company would be watching them. If they succeeded, there would be rewards all around. He said he was sure that they could pull it off.

After Bill’s pep talk, Mike sat down with the team and laid out the schedule. The executive committee had more or less handed them a specification, and they would spend the next 2 weeks filling in the gaps. Then they’d spend 6 weeks on design, which would leave them 4 months for construction and testing. His seat-of-the-pants estimate was that the final product would consist of about 30,000 lines of code in C++. Everyone around the table nodded agreement. It was ambitious, but they’d known that when they signed up for the project.

The next week, Mike met with Stacy, the testing lead. She explained that they should begin handing product builds over to testing no later than September 1, and they should aim to hand over a feature-complete build by October 1. Mike agreed.

The team finished the requirements specification quickly and dove into design. They came up with a design that seemed to make good use of C++’s features. They finished the design by June 15, ahead of schedule, and began coding like crazy to meet their goal of a first release to testing by September 1.
Work on the project wasn’t entirely smooth. Neither Jill nor Tomas liked Chip, and Sue had also complained that he wouldn’t let anyone near his code. Mike attributed the personality clashes to the long hours everyone was working. Nevertheless, by early August, they reported that they were between 85-percent and 90-percent done.

In mid-August, the actuarial department released the rates for the next year, and the team discovered that they had to accommodate an entirely new rate structure. The new rating method required them to ask questions about exercise habits, drinking habits, smoking habits, recreational activities, and other factors that hadn’t been included in the rating formulas before. C++, they thought, was supposed to shield them from the effects of such changes. They had been counting on just plugging some new numbers into a ratings table. But they had to change the input dialogs, database design, database access, and communications objects to accommodate the new structure. As the team scrambled to retrofit their design, Mike told Stacy that they might be a few days late releasing the first build to testing.

The team didn’t have a build ready by September 1, and Mike assured Stacy that the build was only a day or two away. Days turned into weeks. The October 1 deadline for handing over the feature-complete build to testing came and went. Development still hadn’t handed over the first build to testing. Stacy called a meeting with Bill to discuss the schedule. “We haven’t gotten a build from development yet,” she said. “We were supposed to get our first build on September 1, and since we haven’t gotten one yet, they’ve got to be at least a full month behind schedule. I think they’re in trouble.”

“They’re in trouble, all right,” Bill said. “Let me talk to the team. I’ve promised 600 agents that they would have this program by November 1.
We have to get that program out in time for the rate change.”

Bill called a team meeting. “This is a fantastic team, and you should be meeting your commitments,” he told them. “I don’t know what’s gone wrong here, but I expect everyone to work hard and deliver this software on time. You can still earn your bonuses, but now you’re going to have to work for them. As of now, I’m putting all of you on a 6-day-per-week, 10-hour-per-day schedule until this software is done.” After the meeting, Jill and Tomas grumbled to Mike about not needing to be treated like children, but they agreed to work the hours Bill wanted.

The team slipped the schedule two weeks, promising a feature-complete build by November 15. That allowed for 6 weeks of testing before the new rates went into effect in January. The team released its first build to testing 4 weeks later on November 1 and met to discuss a few remaining problem areas.

Tomas was working on report generation and had run into a roadblock. “The quote summary page includes a simple bar chart. I’m using a report generator that’s supposed to generate bar charts, but the only way it will generate them is on pages by themselves. We have a requirement from the sales group to put the text and bar charts on the same page. I’ve figured out that I can hack up a report with a bar chart by passing in the report text as a legend to the bar-chart object. It’s definitely a hack, but I can always go back and reimplement it more cleanly after the first release.”

Mike responded, “I don’t see where the issue is. We have to get the product out, and we don’t have time to make the code perfect. Bill has made it crystal clear that there can’t be any more slips. Do the hack.”

Chip reported that his communications code was 95-percent done and that it worked, but he still had a few more tests to run. Mike caught Jill and Tomas rolling their eyes, but he decided to ignore it.
The team worked hard through November 15, including working almost all the way through the nights of the 14th and 15th, but they still didn’t make their November 15 release date. The team was exhausted, but on the morning of the 16th, it was Bill who felt sick. Stacy had called to tell him that development hadn’t released its feature-complete build the day before. Last week he had told the executive committee that the project was on track. Another project manager, Claire, had probed into the team’s progress, saying that she had heard that they weren’t making their scheduled releases to testing. Bill thought Claire was uptight, and he didn’t like her. He had assured her that his team was definitely on track to make their scheduled releases.

Bill told Mike to get the team together, and when he did, they looked defeated. A month and a half of 60-hour weeks had taken their toll. Mike asked what time today they would have the build ready, but the only response he got was silence. “What are you telling me?” he said. “We are going to have the feature-complete build today, aren’t we?”

“Look, Mike,” Tomas said. “I can hand off my code today and call it ‘feature complete’, but I’ve probably got 3 weeks of cleanup work to do once I hand it off.” Mike asked what Tomas meant by “cleanup.” “I haven’t gotten the company logo to show up on every page, and I haven’t gotten the agent’s name and phone number to print on the bottom of every page. It’s little stuff like that. All of the important stuff works fine. I’m 99-percent done.”

“I’m not exactly 100-percent done either,” Jill admitted. “My old group has been calling me for technical support a lot, and I’ve been spending a couple hours a day working for them. Plus, I had forgotten until just now that we were supposed to give the agents the ability to put their names and phone numbers on the reports. I haven’t implemented the dialogs to input that data yet, and I still have to do some of the other housekeeping dialogs, too.
I didn’t think we needed them to make our ‘feature-complete’ milestone.”

Now Mike started to feel sick, too. “If I’m hearing what I think I’m hearing, you’re telling me that we’re 3 weeks away from having feature-complete software. Is that right?”

“Three weeks at least,” Jill said. The rest of the developers agreed. Mike went around the table one by one and asked the developers if they could completely finish their assignments in 3 weeks. One by one, the developers said that if they worked hard, they thought they could make it.

Later that day, after a long, uncomfortable discussion, Mike and Bill agreed to slip the schedule 3 weeks to December 5, as long as the team promised to work 12-hour days instead of 10. Bill said he needed to show his boss that he was holding the development team’s feet to the fire. The revised schedule meant that they would have to test the code and train the field agents concurrently, but that was the only way they could hope to release the software by January 1. Stacy complained that that wouldn’t give QA enough time to test the software, but Bill overruled her.

On December 5, the Giga-Quote team handed off the feature-complete Giga-Quote program to testing before noon and left work early to take a long-awaited break. They had worked almost constantly since September 1.

Two days later, Stacy released the first bug list, and all hell broke loose. In two days, the testing group had identified more than 200 defects in the Giga-Quote program, including 23 that were classified as Severity 1—“Must Fix”—errors. “I don’t see any way that the software will be ready to release to the field agents by January 1,” she said. “It will probably take the test group that long just to write the regression test cases for the defects we’ve already discovered, and we’re finding new defects every hour.” Mike called a staff meeting for 8 o’clock the next morning.
The developers were touchy. They said that although there were a few serious problems, a lot of the reported bugs weren’t really bugs at all but were misinterpretations of how the program was supposed to operate. Tomas pointed to bug #143 as an example. “The test report for bug #143 says that on the quote summary page, the bar chart is required to be on the right side of the page rather than the left. That’s hardly a Sev-1 error. This is typical of the way that testing overreacts to problems.”

Mike distributed copies of the bug reports. He tasked the developers to review the bugs that testing had assigned to them and to estimate how much time it would take to fix each one. When the team met again that afternoon, the news wasn’t good. “Realistically, I would estimate that I have 2 weeks’ worth of work just to fix the bugs that have already been reported,” Sue said. “Plus I still have to finish the referential integrity checks in the database. I’ve got 4 weeks of work right now, total.”

Tomas had assigned bug #143 back to testing, changing its priority from Sev-1 to Sev-3—“Cosmetic Change.” Testing had responded that Giga-Quote’s summary reports had to match similar reports generated by the mainframe policy-renewal program, which were also similar to preprinted marketing materials that the company had used for many years. The company’s 600 agents were accustomed to giving their sales pitches with the bar chart on the right, and it had to stay on the right. The bug stayed at Sev-1, and that created a problem. “Remember the hack I used to get the bar chart and the report to print on the same page in the first place?” Tomas asked. “To put the bar chart on the right, I will have to rewrite this particular report from scratch, which means that I will have to write my own low-level code to do the report formatting and graphics.” Mike cringed, and asked for a ballpark estimate of how long all that would take.
Tomas said it would take at least 10 days, but he would have to look into it more before he would know for sure. Before he went home for the day, Mike told Stacy and Bill that the team would work through the holidays and have all the reported defects fixed by January 7. Bill said he had almost been expecting this one and approved a 4-week schedule slip before leaving for a monthlong Caribbean cruise he had been planning since the previous summer.

Mike spent the next month holding the troops together. For 4 months, they had been working as hard as it was possible to work, and he didn’t think he could push them any harder. They were at the office 12 hours a day, but they were spending a lot of time reading magazines, paying bills, and talking on the phone. They seemed to make a point of getting irritable whenever he asked how long it would take to get the bug count down. For every bug they fixed, testing discovered two new ones. Bugs that should have taken minutes to fix had projectwide implications and took days instead. They soon realized there was no way they could fix all the defects by January 7.

On January 7, Bill returned from his vacation, and Mike told him that the development team would need another 4 weeks. “Get serious,” Bill said. “I’ve got 600 field agents who are tired of getting jerked around by a bunch of computer guys. The executive committee is talking about canceling the project. You have to find a way to deliver the software within the next 2 weeks, no matter what.”

Mike called a team meeting to discuss their options. He told them about Bill’s ultimatum and asked for a ballpark estimate of when they could release the product, first just in weeks, then in months. The team was silent. No one would hazard a guess about when they might finally release the product. Mike didn’t know what to tell Bill.
After the meeting, Chip told Mike that he had accepted a contract with a different company that started February 3. Mike began to feel that it would be a relief if the project were canceled.

Mike got Kip, the programmer who had been responsible for the mainframe side of the PC-to-mainframe communications, reassigned to help out on the project and assigned him to fix bugs in the PC communications code. After struggling with Chip’s code for a week, Kip realized that it contained some deep conceptual flaws that meant it could never work correctly. Kip was forced to redesign and reimplement the PC side of the PC-to-mainframe communications link.

As Bill rambled on at an executive meeting in the middle of February, Claire finally decided that she had heard enough and called a “stop work” on the Giga-Quote program. She met with Mike on Friday. “This project is out of control,” she said. “I haven’t gotten a reliable schedule estimate from Bill for months. This was a 6-month project, and it’s now more than 3 months late with no end in sight. I’ve looked over the bug statistics, and the team isn’t closing the gap. You’re all working such long hours that you’re not even making progress anymore. I want you all to take the weekend off; then I want you to develop a detailed, step-by-step report that includes everything—and I do mean everything—that remains to be done on that project. I don’t want you to force-fit the project into an artificial schedule. If it’s going to take another 9 months, I want to know that. I want that report by end-of-work Wednesday. It doesn’t have to be fancy, but it does have to be complete.”

The development team was glad to have the weekend off, and during the next week they attacked the detailed report with renewed energy. It was on Claire’s desk Wednesday. She had the report reviewed by Charles, a software engineering consultant who also reviewed the project’s bug statistics.
Charles recommended that the team focus its efforts on a handful of error-prone modules, that it immediately institute design and code reviews for all bug fixes, and that the team start working regular hours so that they could get an accurate measure of how much effort was being expended on the project and how much would be needed to finish. Three weeks later, in the first week in March, the open-bug count had ticked down a notch for the first time. Team morale had ticked up a notch, and based on the steady progress being made, the consultant projected that the software could be delivered—fully tested and reliable—by May 15. Since Giga Safe’s semi-annual rate increase would go into effect July 1, Claire set the official launch date for June 1.

Epilogue

The Giga-Quote program was released to the field agents according to plan on June 1. Giga Safe’s field agents greeted it with a warm if somewhat skeptical reception. The Giga Safe Corporation showed its appreciation for the development team’s hard work by presenting each of the developers with a $250 bonus. A few weeks later, Tomas asked for an extended leave of absence, and Jill went to work for another company.

The final Giga-Quote product was delivered in 13 months rather than 6, a schedule overrun of more than 100 percent. The developer effort, including overtime, consisted of 98 staff-months, which was a 170-percent overrun of the planned 36 staff-months. (CROSS-REFERENCE: For a table of ballpark estimates such as these for projects of various sizes, see Section 8.6, “Ballpark Schedule Estimates.”) The final product was determined to consist of about 40,000 nonblank, noncomment lines of code in C++, which was about 33 percent more than Mike’s seat-of-the-pants guess. As a product that was distributed to 600 in-house sites, Giga-Quote was a hybrid between a business product and a shrink-wrap product.
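The overrun figures quoted above are simple percentage calculations against the original plan. As a quick check, the following Python sketch reproduces them; the helper name is my own, and the numbers come straight from the case study:

```python
def overrun_percent(actual, planned):
    """Overrun expressed as a percentage of the original plan."""
    return (actual - planned) / planned * 100

# Schedule: delivered in 13 calendar months against the 6-month commitment.
schedule = overrun_percent(13, 6)        # about 117%, i.e., "more than 100 percent"

# Effort: 98 staff-months expended against the 36 staff-months planned.
effort = overrun_percent(98, 36)         # about 172%, reported as roughly 170 percent

# Size: about 40,000 lines of code against the 30,000-line estimate.
size = overrun_percent(40_000, 30_000)   # about 33%

print(f"schedule: {schedule:.0f}%, effort: {effort:.0f}%, size: {size:.0f}%")
```

Note that the effort overrun (172 percent) is larger than the schedule overrun (117 percent): the team more than made up in overtime and added staff for what the compressed calendar could not absorb.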
A product of its size and type should normally have been completed in 11.5 months with 71 staff-months of effort. The project had overshot both of those nominals.

3.2 Effect of Mistakes on a Development Schedule

Michael Jackson (the singer, not the computer scientist) sang that “One bad apple don’t spoil the whole bunch, baby.” That might be true for apples, but it isn’t true for software. One bad apple can spoil your whole project.

A group of ITT researchers reviewed 44 projects in 9 countries to examine the impact of 13 productivity factors on productivity (Vosburgh et al. 1984). The factors included the use of modern programming practices, code difficulty, performance requirements, level of client participation in requirements specification, personnel experience, and several others. They divided each of the factors into categories that you would expect to be associated with low, medium, and high performance. For example, they divided the “modern programming practices” factor into low use, medium use, and high use.

Figure 3-1 shows what the researchers found for the “use of modern programming practices” factor. The longer you study Figure 3-1, the more interesting it becomes. The general pattern it shows is representative of the findings for each of the productivity factors studied. The ITT researchers found that projects in the categories that they expected to have poor productivity did in fact have poor productivity, such as the narrow range shown in the Low category in Figure 3-1. But productivity in the high-performance categories varied greatly, such as the wide range shown in the High category in Figure 3-1. Productivity of projects in the High category varied from poor to excellent.
[Figure 3-1. Findings for “Use of Modern Programming Practices” factor (Vosburgh et al. 1984). The chart plots percent of nominal productivity (from -100 to +200, marking the minimum, 25th percentile, mean, 75th percentile, and maximum for each category) against use of modern programming practices as a percentage of the total system: Low (0–25%), Medium (26–75%), and High (76–100%). Doing a few things right doesn’t guarantee rapid development. You also have to avoid doing anything wrong.]

(CROSS-REFERENCE: For more discussion of this specific graph, see Section 4.2, “Technical Fundamentals.” For more on the role that mistakes play in rapid development, see Section 2.1, “General Strategy for Rapid Development.”)

That projects that were expected to have poor productivity do in fact have poor productivity shouldn’t surprise you. But the finding that many of the projects expected to have excellent productivity actually have poor productivity just might be a surprise. What this graph and other graphs like it throughout the book show is that the use of any specific best practice is necessary but not sufficient for achieving maximum development speed. Even if you do a few things right, such as making high use of modern programming practices, you might still make a mistake that nullifies your productivity gains.

When thinking about rapid development, it’s tempting to think that all you have to do is identify the root causes of slow development and eliminate them—and then you’ll have rapid development. The problem is that there aren’t just a handful of root causes of slow development, and in the end trying to identify the root causes of slow development isn’t very useful. It’s like asking, “What is the root cause of my not being able to run a 4-minute mile?” Well, I’m too old. I weigh too much. I’m too out of shape. I’m not willing to train that hard. I don’t have a world-class coach or athletic facility. I wasn’t all that fast even when I was younger. The list goes on and on.
When you talk about exceptional achievements, the reasons that people don’t rise to the top are simply too numerous to list. The Giga-Quote team in Case Study 3-1 made many of the mistakes that have plagued software developers since the earliest days of computing. The software-development road is mined with potholes, and the potholes you fall into partially determine how quickly or slowly you develop software. In software, one bad apple can spoil the whole bunch, baby. To slip into slow development, all you need to do is make one really big mistake; to achieve rapid development you need to avoid making any big mistakes. The next section lists the most common of those big mistakes.

3.3 Classic Mistakes Enumerated

Some ineffective development practices have been chosen so often, by so many people, with such predictable, bad results that they deserve to be called “classic mistakes.” Most of the mistakes have a seductive appeal. Do you need to rescue a project that’s behind schedule? Add more people! Do you want to reduce your schedule? Schedule more aggressively! Is one of your key contributors aggravating the rest of the team? Wait until the end of the project to fire him! Do you have a rush project to complete? Take whatever developers are available right now and get started as soon as possible!

[Figure 3-2. The software project was riddled with mistakes, and all the king’s managers and technical leads couldn’t rescue the project for anyone’s sake.]

Developers, managers, and customers usually have good reasons for making the decisions they do, and the seductive appeal of the classic mistakes is part of the reason these mistakes have been made so often. But because they have been made so many times, their consequences have become easy to predict. And classic mistakes rarely produce the results that people hope for. This section enumerates about two dozen classic mistakes.
I have personally seen each of these mistakes made at least once, and I’ve made more than a few of them myself. Many of them crop up in Case Study 3-1. The common denominator of these mistakes is that you won’t necessarily get rapid development if you avoid these mistakes, but you will definitely get slow development if you don’t avoid them. If some of these mistakes sound familiar, take heart—many other people have made them too. Once you understand their effect on development speed, you can use this list to help with your project planning and risk management.

Some of the more significant mistakes are discussed in their own sections in other parts of this book. Others are not discussed further. For ease of reference, the list has been divided along the development-speed dimensions of people, process, product, and technology.

People

Here are some of the people-related classic mistakes.

1: Undermined motivation. Study after study has shown that motivation probably has a larger effect on productivity and quality than any other factor (Boehm 1981). In Case Study 3-1, management took steps that undermined morale throughout the project—from giving a hokey pep talk at the beginning to requiring overtime in the middle, from going on a long vacation while the team worked through the holidays to providing end-of-project bonuses that worked out to less than a dollar per overtime hour at the end. (CROSS-REFERENCE: For more on the uses and misuses of motivation, see Chapter 11, “Motivation.”)

2: Weak personnel. After motivation, either the individual capabilities of the team members or their relationship as a team probably has the greatest influence on productivity (Boehm 1981, Lakhanpal 1993). Hiring from the bottom of the barrel will threaten a rapid-development effort. In the case study, personnel selections were made with an eye toward who could be hired fastest instead of who would get the most work done over the life of the project.
That practice gets the project off to a quick start but doesn’t set it up for rapid completion.

3: Uncontrolled problem employees. Failure to deal with problem personnel also threatens development speed. This is a common problem and has been well understood at least since Gerald Weinberg published The Psychology of Computer Programming in 1971. Failure to take action to deal with a problem employee is the most common complaint that team members have about their leaders (Larson and LaFasto 1989). In Case Study 3-1, the team knew that Chip was a bad apple, but the team lead didn’t do anything about it. The result—redoing all of Chip’s work—was predictable. (CROSS-REFERENCE: For more on creating effective teams, see Chapter 12, “Teamwork.”)

4: Heroics. Some software developers place a high emphasis on project heroics (Bach 1995). But I think that they do more harm than good. In the case study, mid-level management placed a higher premium on can-do attitudes than on steady and consistent progress and meaningful progress reporting. The result was a pattern of scheduling brinkmanship in which impending schedule slips weren’t detected, acknowledged, or reported up the management chain until the last minute. A small development team and its immediate management held an entire company hostage because they wouldn’t admit that they were having trouble meeting their schedule. An emphasis on heroics encourages extreme risk taking and discourages cooperation among the many stakeholders in the software-development process. Some managers encourage heroic behavior when they focus too strongly on can-do attitudes. (CROSS-REFERENCE: For more on heroics and commitment-based projects, see Section 2.5, “An Alternative Rapid-Development Strategy,” “Commitment-Based Scheduling” in Section 8.5, and Chapter 34, “Signing Up.”)
By elevating can-do attitudes above accurate-and-sometimes-gloomy status reporting, such project managers undercut their ability to take corrective action. They don’t even know they need to take corrective action until the damage has been done. As Tom DeMarco says, can-do attitudes escalate minor setbacks into true disasters (DeMarco 1995).

5: Adding people to a late project. This is perhaps the most classic of the classic mistakes. When a project is behind, adding people can take more productivity away from existing team members than it adds through new ones. Fred Brooks likened adding people to a late project to pouring gasoline on a fire (Brooks 1975). (CROSS-REFERENCE: For alternative means of rescuing a late project, see Chapter 16, “Project Recovery.”)

6: Noisy, crowded offices. Most developers rate their working conditions as unsatisfactory. About 60 percent report that they are neither sufficiently quiet nor sufficiently private (DeMarco and Lister 1987). Workers who occupy quiet, private offices tend to perform significantly better than workers who occupy noisy, crowded work bays or cubicles. Noisy, crowded work environments lengthen development schedules. (CROSS-REFERENCE: For more on the effects of the physical environment on productivity, see Chapter 30, “Productivity Environments.”)

7: Friction between developers and customers. Friction between developers and customers can arise in several ways. Customers may feel that developers are not cooperative when they refuse to sign up for the development schedule that the customers want or when they fail to deliver on their promises. Developers may feel that customers are unreasonably insisting on unrealistic schedules or requirements changes after the requirements have been baselined. There might simply be personality conflicts between the two groups.
The primary effect of this friction is poor communication, and the secondary effects of poor communication include poorly understood requirements, poor user-interface design, and, in the worst case, customers’ refusing to accept the completed product. On average, friction between customers and software developers becomes so severe that both parties consider canceling the project (Jones 1994). Such friction is time-consuming to overcome, and it distracts both customers and developers from the real work of the project. (CROSS-REFERENCE: For more on effective customer relations, see Chapter 10, “Customer-Oriented Development.”)

8: Unrealistic expectations. One of the most common causes of friction between developers and their customers or managers is unrealistic expectations. In Case Study 3-1, Bill had no sound reason to think that the Giga-Quote program could be developed in 6 months, but that’s when the company’s executive committee wanted it done. Mike’s inability to correct that unrealistic expectation was a major source of problems. In other cases, project managers or developers ask for trouble by getting funding based on overly optimistic schedule estimates. Sometimes they promise a pie-in-the-sky feature set. Although unrealistic expectations do not in themselves lengthen development schedules, they contribute to the perception that development schedules are too long, and that can be almost as bad. A Standish Group survey listed realistic expectations as one of the top five factors needed to ensure the success of an in-house business-software project (Standish Group 1994). (CROSS-REFERENCE: For more on setting expectations, see Section 10.3, “Managing Customer Expectations.”)

9: Lack of effective project sponsorship. High-level project sponsorship is necessary to support many aspects of rapid development, including realistic planning, change control, and the introduction of new development practices.
Without an effective executive sponsor, other high-level personnel in your organization can force you to accept unrealistic deadlines or make changes that undermine your project. Australian consultant Rob Thomsett argues that lack of an effective executive sponsor virtually guarantees project failure (Thomsett 1995).

10: Lack of stakeholder buy-in. All the major players in a software-development effort must buy in to the project. That includes the executive sponsor, team leader, team members, marketing staff, end-users, customers, and anyone else who has a stake in it. The close cooperation that occurs only when you have complete buy-in from all stakeholders allows for precise coordination of a rapid-development effort that is impossible to attain without good buy-in.

11: Lack of user input. The Standish Group survey found that the number one reason that IS projects succeed is user involvement (Standish Group 1994). Projects without early end-user involvement risk misunderstanding the projects’ requirements and are vulnerable to time-consuming feature creep later in the project.

12: Politics placed over substance. Larry Constantine reported on four teams that had four different kinds of political orientations (Constantine 1995a). “Politicians” specialized in “managing up,” concentrating on relationships with their managers. “Researchers” concentrated on scouting out and gathering information. “Isolationists” kept to themselves, creating project boundaries that they kept closed to non-team members. “Generalists” did a little bit of everything: they tended their relationships with their managers, performed research and scouting activities, and coordinated with other teams through the course of their normal workflow. (CROSS-REFERENCE: For more on healthy politics, see Section 10.3, “Managing Customer Expectations.”)
Constantine reported that initially the political and generalist teams were both well regarded by top management. But after a year and a half, the political team was ranked dead last. Putting politics over results is fatal to speed-oriented development.

13: Wishful thinking. I am amazed at how many problems in software development boil down to wishful thinking. How many times have you heard statements like these from different people:

"None of the team members really believed that they could complete the project according to the schedule they were given, but they thought that maybe if everyone worked hard, and nothing went wrong, and they got a few lucky breaks, they just might be able to pull it off."

"Our team hasn't done very much work to coordinate the interfaces among the different parts of the product, but we've all been in good communication about other things, and the interfaces are relatively simple, so it'll probably take only a day or two to shake out the bugs."

"We know that we went with the low-ball contractor on the database subsystem, and it was hard to see how they were going to complete the work with the staffing levels they specified in their proposal. They didn't have as much experience as some of the other contractors, but maybe they can make up in energy what they lack in experience. They'll probably deliver on time."

"We don't need to show the final round of changes to the prototype to the customer. I'm sure we know what they want by now."

"The team is saying that it will take an extraordinary effort to meet the deadline, and they missed their first milestone by a few days, but I think they can bring this one in on time."

Wishful thinking isn't just optimism. It's closing your eyes and hoping something works when you have no reasonable basis for thinking it will. Wishful thinking at the beginning of a project leads to big blowups at the end of a project.
It undermines meaningful planning and may be at the root of more software problems than all other causes combined.

Process

Process-related mistakes slow down projects because they squander people's talents and efforts. Here are some of the worst process-related mistakes.

CROSS-REFERENCE: For more on unrealistic schedules, see Section 9.1, "Overly Optimistic Scheduling."

14: Overly optimistic schedules. The challenges faced by someone building a 3-month application are quite different from those faced by someone building a 1-year application. Setting an overly optimistic schedule sets a project up for failure by underscoping it, undermining effective planning, and abbreviating critical upstream development activities such as requirements analysis and design. It also puts excessive pressure on developers, which hurts long-term developer morale and productivity. This was a major source of problems in Case Study 3-1.

CROSS-REFERENCE: For more on risk management, see Chapter 5, "Risk Management."

15: Insufficient risk management. Some mistakes are not common enough to be considered classic. Those are called "risks." As with the classic mistakes, if you don't actively manage risks, only one thing has to go wrong to change your project from a rapid-development project into a slow-development one. The failure to manage such unique risks is a classic mistake.

CROSS-REFERENCE: For more on contractors, see Chapter 28, "Outsourcing."

16: Contractor failure. Companies sometimes contract out pieces of a project when they are too rushed to do the work in-house. But contractors frequently deliver work that's late, that's of unacceptably low quality, or that fails to meet specifications (Boehm 1989). Risks such as unstable requirements or ill-defined interfaces can be magnified when you bring a contractor into the picture.
If the contractor relationship isn't managed carefully, the use of contractors can slow a project down rather than speed it up.

CROSS-REFERENCE: For more on planning, see "Planning" in Section 4.1.

17: Insufficient planning. If you don't plan to achieve rapid development, you can't expect to achieve it.

CROSS-REFERENCE: For more on planning under pressure, see Section 9.2, "Beating Schedule Pressure," and Chapter 16, "Project Recovery."

18: Abandonment of planning under pressure. Project teams make plans and then routinely abandon them when they run into schedule trouble (Humphrey 1989). The problem isn't so much in abandoning the plan as in failing to create a substitute and then falling into code-and-fix mode instead. In Case Study 3-1, the team abandoned its plan after it missed its first delivery, and that's typical. The work after that point was uncoordinated and awkward, to the point that Jill even started working part of the time on a project for her old group and no one even knew it.

19: Wasted time during the fuzzy front end. The "fuzzy front end" is the time before the project starts, the time normally spent in the approval and budgeting process. It's not uncommon for a project to spend months or years in the fuzzy front end and then to come out of the gates with an aggressive schedule. It's much easier, cheaper, and less risky to save a few weeks or months in the fuzzy front end than it is to compress a development schedule by the same amount.

CROSS-REFERENCE: For more on shortchanging upstream activities, see "Effects of Overly Optimistic Schedules" in Section 9.1.

20: Shortchanged upstream activities. Projects that are in a hurry try to cut out nonessential activities, and since requirements analysis, architecture, and design don't directly produce code, they are easy targets. On one disastrous project that I took over, I asked to see the design.
The team lead told me, "We didn't have time to do a design." The results of this mistake, also known as "jumping into coding," are all too predictable. In the case study, a design hack in the bar-chart report was substituted for quality design work. Before the product could be released, the hack work had to be thrown out and the higher-quality work done anyway. Projects that skimp on upstream activities typically have to do the same work downstream at anywhere from 10 to 100 times the cost of doing it properly in the first place (Fagan 1976; Boehm and Papaccio 1988). If you can't find the 5 hours to do the job right the first time, where are you going to find the 50 hours to do it right later?

21: Inadequate design. A special case of shortchanging upstream activities is inadequate design. Rush projects undermine design by not allocating enough time for it and by creating a pressure-cooker environment that makes thoughtful consideration of design alternatives difficult. The design emphasis is on expediency rather than quality, so you tend to need several ultimately time-consuming design cycles before you can finally complete the system.

CROSS-REFERENCE: For more on quality assurance, see Section 4.3, "Quality-Assurance Fundamentals."

22: Shortchanged quality assurance. Projects that are in a hurry often cut corners by eliminating design and code reviews, eliminating test planning, and performing only perfunctory testing. In the case study, design reviews and code reviews were given short shrift in order to achieve a perceived schedule advantage. As it turned out, when the project reached its feature-complete milestone it was still too buggy to release for 5 more months. This result is typical. Shortcutting 1 day of QA activity early in the project is likely to cost you from 3 to 10 days of activity downstream (Jones 1994). This shortcut undermines development speed.
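The cost multipliers cited for mistakes 20 and 22 lend themselves to a quick back-of-the-envelope check. Here is a minimal sketch in Python; the multiplier ranges come from the text above, while the hour figures and the `downstream_cost` helper are invented for illustration:

```python
# Illustrative arithmetic only: the 10-100x (skipped upstream work) and
# 3-10x (skipped QA) multipliers are the ranges cited in the text; the
# hour figures are made up for the example.

def downstream_cost(hours_saved_now, multiplier):
    """Hours of downstream rework created by cutting work now."""
    return hours_saved_now * multiplier

# Skipping 5 hours of design, at the low end of the 10-100x range:
print(downstream_cost(5, 10))   # prints 50

# Cutting one 8-hour day of QA, at the high end of the 3-10x range:
print(downstream_cost(8, 10))   # prints 80
```

Even at the low ends of the cited ranges, the "savings" are paid back several times over.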
CROSS-REFERENCE: For more on management controls, see "Tracking" in Section 4.1 and Chapter 27, "Miniature Milestones."

23: Insufficient management controls. In the case study, few management controls were in place to provide timely warnings of impending schedule slips, and the few controls that were in place at the beginning were abandoned once the project ran into trouble. Before you can keep a project on track, you have to be able to tell whether it's on track in the first place.

CROSS-REFERENCE: For more on premature convergence, see "Premature convergence" in Section 9.1.

24: Premature or overly frequent convergence. Shortly before a product is scheduled to be released, there is a push to prepare it for release: improve its performance, print final documentation, incorporate final help-system hooks, polish the installation program, stub out functionality that's not going to be ready on time, and so on. On rush projects, there is a tendency to force convergence early. Since it's not possible to force the product to converge when desired, some rapid-development projects try to force convergence a half dozen times or more before they finally succeed. The extra convergence attempts don't benefit the product. They just waste time and prolong the schedule.

CROSS-REFERENCE: For a list of commonly omitted tasks, see "Don't omit common tasks" in Section 8.3.

25: Omitting necessary tasks from estimates. If people don't keep careful records of previous projects, they forget about the less visible tasks, but those tasks add up. Omitted effort often adds about 20 to 30 percent to a development schedule (van Genuchten 1991).

CROSS-REFERENCE: For more on reestimation, see "Recalibration" in Section 8.7.

26: Planning to catch up later. One kind of reestimation is responding inappropriately to a schedule slip.
If you’re working on a 6-month project, and it takes you 3 months to meet your 2-month milestone, what do you do? Many projects simply plan to catch up later, but they never do. You learn more about the product as you build it, including more about what it will take to build it. That learning needs to be reflected in the reestimated schedule. Another kind of reestimation mistake arises from product changes. If the product you’re building changes, the amount of time you need to build it changes too. In Case Study 3-1, major requirements changed between the original proposal and the project start without any corresponding reestimation of schedule or resources. Piling on new features without adjusting the schedule guarantees that you will miss your deadline. CROSS-REFERENCE For more on the Code-andFix approach, see Section 7.2, “Code-and-Fix.” 27: Code-like-hell programming. Some organizations think that fast, loose, all-as-you-go coding is a route to rapid development. If the developers are sufficiently motivated, they reason, they can overcome any obstacles. For reasons that will become clear throughout this book, this is far from the truth. This approach is sometimes presented as an “entrepreneurial” approach to software development, but it is really just a cover for the old Code-and-Fix paradigm combined with an ambitious schedule, and that combination almost never works. It’s an example of two wrongs not making a right. Product Here are classic mistakes related to the way the product is defined. 28: Requirements gold-plating. Some projects have more requirements than they need, right from the beginning. Performance is stated as a requirement more often than it needs to be, and that can unnecessarily lengthen a software schedule. Users tend to be less interested in complex features than marketing and development are, and complex features add disproportionately to a development schedule. CROSS-REFERENCE 29: Feature creep. 
Even if you’re successful at avoiding requirements gold- For more on feature creep, see Chapter 14, “Feature-Set Control.” plating, the average project experiences about a 25-percent change in requirements over its lifetime (Jones 1994). Such a change can produce at least 46 3.3 Classic Mistakes Enumerated a 25-percent addition to the software schedule, which can be fatal to a rapiddevelopment project. CROSS-REFERENCE 30: Developer gold-plating. Developers are fascinated by new technology For an example of the way that developer gold-plating can occur even accidentally, see “Unclear or Impossible Goals” in Section 14.2. and are sometimes anxious to try out new features of their language or environment or to create their own implementation of a slick feature they saw in another product—whether or not it’s required in their product. The effort required to design, implement, test, document, and support features that are not required lengthens the schedule. 31: Push-me, pull-me negotiation. One bizarre negotiating ploy occurs when a manager approves a schedule slip on a project that’s progressing slower than expected and then adds completely new tasks after the schedule change. The underlying reason for this is hard to fathom, because the manager who approves the schedule slip is implicitly acknowledging that the schedule was in error. But once the schedule has been corrected, the same person takes explicit action to make it wrong again. This can’t help but undermine the schedule. 32: Research-oriented development. Seymour Cray, the designer of the Cray supercomputers, says that he does not attempt to exceed engineering limits in more than two areas at a time because the risk of failure is too high (Gilb 1988). Many software projects could learn a lesson from Cray. If your project strains the limits of computer science by requiring the creation of new algorithms or new computing practices, you’re not doing software development; you’re doing software research. 
Software-development schedules are reasonably predictable; software-research schedules are not even theoretically predictable. If you have product goals that push the state of the art (algorithms, speed, memory usage, and so on), you should assume that your scheduling is highly speculative. If you're pushing the state of the art and your project has any other weaknesses (personnel shortages, personnel weaknesses, vague requirements, unstable interfaces with outside contractors), you can throw predictable scheduling out the window. If you want to advance the state of the art, by all means, do it. But don't expect to do it rapidly!

Technology

The remaining classic mistakes have to do with the use and misuse of modern technology.

CROSS-REFERENCE: For more on the silver-bullet syndrome, see Section 15.5, "Silver-Bullet Syndrome."

33: Silver-bullet syndrome. In the case study, there was too much reliance on the advertised benefits of previously unused technologies (report generator, object-oriented design, and C++) and too little information about how well they would do in this particular development environment. When project teams latch onto a single new practice, new technology, or rigid process and expect it to solve their schedule problems, they are inevitably disappointed (Jones 1994).

CROSS-REFERENCE: For more on estimating savings from productivity tools, see "How Much Schedule Reduction to Expect" in Section 15.4.

34: Overestimated savings from new tools or methods. Organizations seldom improve their productivity in giant leaps, no matter how many new tools or methods they adopt or how good they are. Benefits of new practices are partially offset by the learning curves associated with them, and learning to use new practices to their maximum advantage takes time. New practices also entail new risks, which you're likely to discover only by using them.
You are more likely to experience slow, steady improvement on the order of a few percent per project than you are to experience dramatic gains. The team in Case Study 3-1 should have planned on, at most, a 10-percent gain in productivity from the use of the new technologies instead of assuming that they would nearly double their productivity.

CROSS-REFERENCE: For more on reuse, see Chapter 33, "Reuse."

A special case of overestimated savings arises when projects reuse code from previous projects. This kind of reuse can be a very effective approach, but the time savings is rarely as dramatic as expected.

35: Switching tools in the middle of a project. This is an old standby that hardly ever works. Sometimes it can make sense to upgrade incrementally within the same product line, from version 3 to version 3.1 or sometimes even to version 4. But the learning curve, rework, and inevitable mistakes made with a totally new tool usually cancel out any benefit when you're in the middle of a project.

CROSS-REFERENCE: For more on source-code control, see "Software Configuration Management" in Section 4.2.

36: Lack of automated source-code control. Failure to use automated source-code control exposes projects to needless risks. Without it, if two developers are working on the same part of the program, they have to coordinate their work manually. They might agree to put the latest versions of each file into a master directory and to check with each other before copying files into that directory. But someone invariably overwrites someone else's work. People develop new code to out-of-date interfaces and then have to redesign their code when they discover that they were using the wrong version of the interface. Users report defects that you can't reproduce because you have no way to re-create the build they were using. On average, source code changes at a rate of about 10 percent per month, and manual source-code control can't keep up (Jones 1994).
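The schedule arithmetic behind mistakes 26 (planning to catch up later) and 29 (feature creep) is easy to make concrete. The slip-ratio recalibration below is one common rule of thumb, not a formula from this chapter, and the function name is my own:

```python
# Hypothetical recalibration sketch. Mistake 26: if the 2-month milestone
# took 3 months, assume the remaining work slips by the same ratio instead
# of planning to "catch up later."

def recalibrate(total_months, milestone_planned, milestone_actual):
    slip_ratio = milestone_actual / milestone_planned
    remaining = total_months - milestone_planned
    return milestone_actual + remaining * slip_ratio

# The 6-month project from the text, with its 2-month milestone met in 3:
print(recalibrate(6, 2, 3))         # prints 9.0 -- a 9-month project, not 6

# Mistake 29: a 25-percent growth in requirements adds at least
# 25 percent more schedule on top of that.
print(recalibrate(6, 2, 3) * 1.25)  # prints 11.25
```

The point of the sketch is only that the slip is multiplicative, which is why "catching up later" almost never happens.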
Table 3-1 contains a complete list of classic mistakes.

Table 3-1. Summary of Classic Mistakes

People-Related Mistakes
1. Undermined motivation
2. Weak personnel
3. Uncontrolled problem employees
4. Heroics
5. Adding people to a late project
6. Noisy, crowded offices
7. Friction between developers and customers
8. Unrealistic expectations
9. Lack of effective project sponsorship
10. Lack of stakeholder buy-in
11. Lack of user input
12. Politics placed over substance
13. Wishful thinking

Process-Related Mistakes
14. Overly optimistic schedules
15. Insufficient risk management
16. Contractor failure
17. Insufficient planning
18. Abandonment of planning under pressure
19. Wasted time during the fuzzy front end
20. Shortchanged upstream activities
21. Inadequate design
22. Shortchanged quality assurance
23. Insufficient management controls
24. Premature or overly frequent convergence
25. Omitting necessary tasks from estimates
26. Planning to catch up later
27. Code-like-hell programming

Product-Related Mistakes
28. Requirements gold-plating
29. Feature creep
30. Developer gold-plating
31. Push-me, pull-me negotiation
32. Research-oriented development

Technology-Related Mistakes
33. Silver-bullet syndrome
34. Overestimated savings from new tools or methods
35. Switching tools in the middle of a project
36. Lack of automated source-code control

3.4 Escape from Gilligan's Island

A complete list of classic mistakes would go on for pages more, but those presented are the most common and the most serious. As Seattle University's David Umphress points out, watching most organizations attempt to avoid these classic mistakes seems like watching reruns of Gilligan's Island. At the beginning of each episode, Gilligan, the Skipper, or the Professor comes up with a cockamamie scheme to get off the island.
The scheme seems as though it's going to work for a while, but as the episode unfolds, something goes wrong, and by the end of the episode the castaways find themselves right back where they started: stuck on the island. Similarly, at the end of each project most companies find that they have made yet another classic mistake and delivered yet another project behind schedule, over budget, or both.

Your Own List of Worst Practices

Be aware of the classic mistakes. Create lists of "worst practices" to avoid on future projects. Start with the list in this chapter. Add to it by conducting project postmortems to learn from your team's mistakes. Encourage other projects within your organization to conduct postmortems so that you can learn from their mistakes. Exchange war stories with your colleagues in other organizations, and learn from their experiences. Display your list of mistakes prominently so that people will see it and learn not to make the same mistakes yet another time.

Further Reading

Although a few books discuss coding mistakes, I know of no book that describes classic mistakes related to development schedules. Further reading on related topics is provided throughout the rest of this book.
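The worst-practices list described above can start as a plain text file, but even a tiny script makes it easy to run the list against each postmortem. A minimal sketch follows; the mistake names come from this chapter's Table 3-1, while the review mechanism and the names `CLASSIC_MISTAKES` and `review` are illustrative assumptions, not anything the book prescribes:

```python
# A starter "worst practices" checklist seeded from this chapter's list.
# The numbering matches Table 3-1; extend the dict with your own
# organization's mistakes after each postmortem.

CLASSIC_MISTAKES = {
    5: "Adding people to a late project",
    14: "Overly optimistic schedules",
    18: "Abandonment of planning under pressure",
    22: "Shortchanged quality assurance",
    29: "Feature creep",
    36: "Lack of automated source-code control",
}

def review(observed):
    """Map the mistake numbers a postmortem turned up to their names."""
    return [CLASSIC_MISTAKES[n] for n in sorted(observed) if n in CLASSIC_MISTAKES]

# A postmortem that found schedule pressure and skipped reviews:
for name in review({14, 22}):
    print("Avoid next time:", name)
```

Keeping the list in a form the whole team can query is one way to satisfy the "display your list prominently" advice.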
50 Index Special Characters 3GLs (third-generation languages), 365 switching from a, 360–62 4GLs (fourth-generation languages), 365 switching to a, 360–62 90-90 problem, 580 A Aburdene, Patricia, 258 accountability, team structure and, 302 accuracy of data, measurement and, 476 precision vs., 173 achievement motivation, 255 adherence to plan, overly optimistic scheduling and, 211 Advances in Software Inspections, 77 air-traffic-control workstation software, 320 Albrecht, Allan J., 174, 187 all-out rapid development, 20 Amish barn raising team, 273–74, 277, 279, 280, 286, 287, 292 Andriole, Steven J., 573 Applied Software Measurement (Jones), 204, 479 appreciation (recognition) failure of teams and lack of, 290 lack of, as morale killer, 267–68 motivation and gestures of, 262–63 “Article of Understanding” (with customers), 325 Art of Software Testing, The (Myers), 76 Assessment and Control of Software Risks (Jones), 106, 161, 204, 343, 369, 608 assuming risks, 98 Auerbach, Red, 51 August, Judy, 452, 455, 461, 463 Australian Computer Society Software Challenge, 61 automatic programming, 366–67 “Automatic Programming: Myths and Prospects” (Rich and Waters), 366 autonomy, sense of of team members, 286–87 work itself as motivation and, 259 avoiding risks, 97. See also risk management B Babich, W., 68 Bach, James, 41 Baker, F. Terry, 305, 306 ballpark schedule estimates, 185–97 efficient schedules, 192–94 first thing to do with, 196–97 nominal schedules, 194–96 shortest possible schedules, 188–92 tables of, 186–88 barn raising. See Amish barn raising team baseline report, 475 Basili, Victor R., 12, 14, 74, 360, 469, 475, 556 Baumert, John, 57 Bayer, Sam, 287 Becoming a Technical Leader (Weinberg), 271 Belbin, Meredith, 284 Belz, F. C., 161 Bentley, Jon, 68, 436 Bersoff, Edward H., 68 best practices in rapid development, 390–401. 
See also specific practices chapters in this book on, 390 organization of, 392–94 sections in, 394 reasons for excluding candidates for, 395–96 summary of best-practice candidates, 394–400 summary of evaluations of, 400–401 summary tables on “chance of first-time success” entry, 393 “chance of long-term success” entry, 393–94 625 Index best practices in rapid development, summary tables on, continued “effect on schedule risk” entry, 393 “improvement in progress visibility” entry, 393 “major interactions and trade-offs” entry, 394 “major risks” entry, 394 “potential reduction from nominal schedule” entry, 392 sample table, 390–91 Bieman, James M., 431, 435, 436, 438, 439, 441, 443, 571, 597 Biggerstaff, Ted, 535 Bock, Douglas, 360 Boddie, John, 314, 338, 339, 388, 603, 604 Boehm, Barry W., 12–13, 40, 44, 45, 62, 71–73, 84, 94, 106, 123, 124, 161, 181–83, 192, 203, 212, 217, 249, 257, 262, 263, 276, 277, 331, 335, 417, 462, 484, 511, 518, 559, 568, 585, 601 Boeing, 256 Booch, Grady, 67 Brady, Shelia, 543, 603 breaking the build, penalty for, 410 Brodman, Judith G., 474 Brooks, Frederick P., Jr., 41, 207, 222, 305, 311, 317, 348, 349, 365, 369, 378, 411, 418, 534 Brooks, Ruven, 348 budget, unrealistic expectations about, evolutionary prototyping and, 436 buffers, project overrun and, 94 Bugsy (film), 344 build group, establishing a, 408–9 building a program, 406. 
See also daily build and smoke test Building Quality Software (Glass), 76 Burlton, Roger, 55, 428 burnout overly optimistic scheduling and, 217 signing up and, 544 business software, defined, 22 business-team structure, 304 Butler Cox Foundation, 293 buy-in by stakeholders, lack of, 42 Bylinsky, Gene, 207 626 C cancelation of projects, 319 Capability Maturity Model, The (Carnegie Mellon University/Software Engineering Institute), 28, 59 Card, David N., 12, 74, 249, 531, 532 Carnegie Mellon University, 28 Carroll, Paul B., 23, 343, 488 case-based estimating, 181 case studies change management, effective, 342–43 classic mistakes, 29–37 estimation careful project estimation, 200–202 seat-of-the-pants, 164 fundamentals, lack of, 52–55 fuzzy front end, 124–25 lifecycle models effective selection of lifecycle model, 159–61 ineffective selection of, 134–36 motivation disheartening lunch with the boss, 250–51 highly motivational environment, 270–71 productivity tools effective tool use, 368–69 ineffective tool use, 346–47 recovery of projects successful project recovery, 385–87 unsuccessful project recovery, 372–73 requirements, 234–36, 246 risk management contractor, lack of, 82–83 systematic, 103–5 schedule negotiation, successful, 229–30 strategy for rapid development clear, 25–27 lack of a clear, 6 team structure good match between project objectives and team structure, 315–16 mismatch between project objectives and team structure, 297–99 Index case studies, continued teamwork high-performance team, 277–78 poor group dynamics, 274–75 selection of team members, 282–83 CASE tools, 363, 364, 366 Caswell, Deborah L., 478 categories, estimation by, 178 “chance of first-time success” entry, in best-practice summary tables, 393 “chance of long-term success” entry, in best-practice summary tables, 393–94 change designing for (see designing for change) managing a high-performance team and, 288 change analysis, 338–39 change-control boards, 339–41, 403 Cherlin, M., 293 
chief-programmer team structure, 305 Chow, Tsun S., 76 classic mistakes, 8, 29–50 awareness of, 50 case study, 29–37 effect on development schedule, 37–39 people-related, 40–43 process-related, 44–46 product-related, 46–47 recovery of projects and, 379–80 technology-related, 47–48 Clements, Paul C., 423 CNA Insurance Company, 461 Coad, Peter, 67 COCOMO model, 182 code-and-fix lifecycle model, 140 strengths and weaknesses of, 156 Code Complete (McConnell), xvi, 67, 75, 343, 414, 423, 432, 524 code-like-hell approach (commitment-based approach), 22–25, 184 as classic mistake, 46 disadvantages of, 23–24 successes and advantages of, 22–23 this book’s approach compared to, 24–25 code reading, 74 cohesiveness of teams, performance and, 277 Comer, Ed, 531, 532 commercial software, defined, 22 commitment to productivity tool selection, 358 to the team, 285 commitment-based approach. See code-like-hell approach (commitment-based approach) communication failure of teams and, 290 among team members, 286 team structure and, 302 Communications of the ACM, 317 Complete Guide to Software Testing, The (Hetzel), 76 “Component Software” (Udell), 538 Confessions of a Used Program Salesman (Tracz), 538 confidence factors, estimation and, 181–82 Connell, John, 443 Constantine, Larry L., 28, 43, 61, 67, 294, 296, 317 Constantine on Peopleware (Constantine), 28, 317 construction customer-oriented, 242–43 as development fundamental, 64–65 further reading on, 67–68 Conte, S. D., 479 contractors. 
See also outsourcing failure of, as classic mistake, 44 risks associated with, 89 contracts, outsourcing, 493, 495, 498–99 control estimation vs., 170 evolutionary prototyping and diminished, 436–37 loss of outsourcing and, 500 signing up and, 544 Controlling Software Projects (DeMarco), 59, 131, 203 convergence, premature or overly frequent as classic mistake, 45–46 overly optimistic scheduling and, 213–15 627 Index cost lowest, as rapid development look-alike, 114 trade-offs among product, schedule, and, 126 Costello, Scott H., 207, 231 cost ratios, 72 Cox, Charles, 468, 474 Cray, Seymour, 47 “Creating New Software Was Agonizing Task for Mitch Kapor Firm” (Carroll), 343 creativity, overly optimistic scheduling and, 216–17 creativity team, 300 crisis, miniature milestones in response to a, 484 crowded work environments, 41 Crunch Mode (Boddie), 388 Curtis, Bill, 12, 213, 249 custom applications, productivity tools and, 350 customer-oriented development, 16, 233–47 importance of customers to rapid development, 236–38 customer-oriented practices, 238–43 construction, 242–43 design, 242 further reading on, 247 planning, 239 requirements analysis, 239–42 requirements practices, 338 customer relations daily build and smoke test and, 407 overly optimistic scheduling and, 213 Theory-W management and, 561 customers developers’ understanding with, 325 expectations of, 243–45 unrealistic, 120–21, 243, 245 friction between developers and, 41–42 identifying, 116, 239 risks associated with, 88–89 customer-satisfaction surveys, 241, 242 Cusumano, Michael, 116, 255, 270, 272, 290, 324, 327, 341, 407, 414, 432, 536 cycles, short release, 339 628 D daily build and smoke test, 405–14 bottom line on, 413–14 further reading on, 414 interactions with other practices, 413 managing risks of, 412–13 side effects of, 413 summary of, 405 time savings produced by, 406–7 using, 407–12 adding code to the build, 409 broken builds, checking for, 407–8 build and smoke testing even under 
pressure, 411 build group, establishing a, 408–9 daily building of the product, 407 daily smoke test, 408 holding area for code that’s to be added to the build, 409–10 keys to success in, 414 kinds of projects that can use this process, 411–12 penalty for breaking the build, 410 releasing builds in the morning, 410–11 requiring developers to smoke test before adding code to system, 409 Davis, Alan M., 68, 366 DBMS-oriented applications, productivity tools and, 350 deadlocks, breaking, in principled negotiations, 227 Debugging the Development Process (Maguire), 28, 231, 608 Dedene, Guido, 493, 496, 502 defect rates, 69–71 of error-prone modules, 72 defects, preventing, 71 DeGrace, Peter, 143, 161 DeMarco, Tom, 12, 28, 41, 59, 118, 131, 174, 186, 203, 217, 231, 249, 271, 276, 288, 290, 293–95, 317, 334, 506, 513, 543, 603, 608 Deming, W. Edwards, 265 demonstration prototyping, 591 Index design. See also designing for change customer-oriented, 242 as development fundamental, 62–63 evolutionary prototyping and poor, 438–39 further reading on, 66–67 inadequate, as classic mistake, 45 risks associated with, 91 “Design and Code Inspections to Reduce Errors in Program Development” (Fagan), 77 designing for change, 415–23 bottom line on, 422 further reading on, 423 interactions with other practices, 422 managing the risks of, 421–22 side effects of, 422 summary of, 415 using, 416–21 change plans, 419 defining families of programs, 419–21 identifying areas likely to change, 416–17 information hiding, 417–19 keys to success in, 423 object-oriented design, 421 Design Objectives and Criteria (Boeing), 256 design-to-schedule lifecycle model, 149–50 strengths and weaknesses of, 157 design-to-tools lifecycle model, 152–53 strengths and weaknesses of, 157 developers as bad negotiators, 221 estimates prepared by, 178 feedback on measurement data to, 474–75 friction between customers and, 41–42 gold-plating by, 47 hygiene factors for, 265–66 increased attachment to specific features, 
328 in JAD sessions, 454 miniature milestones of, 485 motivations of, 251–54 not involving, in decisions that affect them, as morale killer, 268–69 overly optimistic scheduling and relationships between managers and, 217 developers, continued recovery of projects and, 379 reuse and, 535 Developing Products in Half the Time (Smith and Reinertsen), 131 development environment, risks associated with, 88 development speed. See speed of development De Vreese, Jean-Pierre, 493, 496, 502 diagnosis of defects, daily build and smoke test and, 406 Discipline for Software Engineering, A (Humphrey), 28 documentation graphically oriented, 329 reuse and, 534 Dreger, Brian, 204 drop-dead date, fixed, 114–15 Dunn, Robert H., 62 Dunsmore, H. E., 479 Dupont, 293, 586 Dyer, William G., 296 Dynamics of Software Development (McCarthy), 317, 388, 414, 432 E “effect on schedule risk” entry, in best-practice summary tables, 393 efficient development, 19–20 customer relationships and, 237 tilted toward best schedule, 20 efficient schedules ballpark estimates for, 192–94 shortest possible schedules and, 193 effort estimation, 182–83 Elements of Friendly Software Design, The (Heckel), 317 embedded software, 22 Emery, Fred, 287, 288 Emery, Merrelyn, 287, 288 employees. See personnel (employees; staff) empowerment, sense of, team members’, 287 encapsulation, planned reuse and, 534 629 Index end-users. See also customer-oriented development; customers lack of involvement, 42 risks associated with, 88 Erikson, W. J., 12, 217, 249 Ernst and Young, 61 error-prone modules, 72 error rates. See defect rates estimate-convergence graph, 168 estimates (estimation), 163–204 accuracy vs. 
precision and, 173 allowing time planning for, 178 ballpark schedule, 185–97 efficient schedules, 192–94 first thing to do with, 196–97 nominal schedules, 194–96 shortest possible schedules, 188–92 tables of, 186–88 case-based, 181 case studies careful project estimation, 200–202 seat-of-the-pants estimation, 164 by categories, 178 changing practices of, 179 commitment-based scheduling and, 184 confidence factors and, 181–82 construction example, 166 control vs., 170 convergence between reality and, 171–72 cooperation and, 171 developer-based, 178 as development fundamental, 55 effort, 182–83 function-point, 174–77 further reading on, 203–4 as gradual refinement process, 165–69 at low level of detail, 178 off-the-cuff, 177 omission of common tasks and, 179 omitting necessary tasks from, as classic mistake, 46 overview of the process of, 173–74 padding of, 184 plus-or-minus style, 180 presentation styles for, 179–82 previous projects as basis for, 178 principled negotiation method and, 227–29 probability of loss, 93 ranges for, 180 recalibration of schedules, 199–200 reestimation mistakes, 46 refinement of, 197–200 risk quantification and, 180–81 schedule, 183–85 size, 174–82 definition of size, 175 function-point estimation, 174 size of loss, 93 software tools for, 179 staged delivery and, 552 story of, 165–72, 221 tips on, 177–79 using several different techniques for, 179 by walk-through, 178 evolutionary delivery, 151–52, 425–32 further reading on, 432 interactions with other practices, 431 managing the risks of, 429 side effects of, 430 strengths and weaknesses of, 157 summary of, 425 using, 427–29 keys to success in, 432 release order, 428–29 when to use, 429 evolutionary prototyping lifecycle model, 147–48, 433–43 bottom line on, 441–42 evolutionary delivery and, 425, 426, 428–31 further reading on, 443 interactions with other practices, 441 managing the risks of, 435–40 diminished project control, 436–37 feature creep,
439 inefficient use of prototyping time, 440 poor design, 438–39 poor end-user or customer feedback, 437 poor maintainability, 439 poor product performance, 437 unrealistic performance expectations, 437 unrealistic schedule and budget expectations, 436 side effects of, 440 strengths and weaknesses of, 156 summary of, 433–34 user-interface prototyping and, 591–92 using, 435 keys to success in, 442 expectations measurements and, 468 unrealistic, 42 of customers, 120–21 after JAD session, 460 about performance, evolutionary prototyping and, 437–38 about schedule and budget, evolutionary prototyping and, 436 about schedule and budget, throwaway prototyping and, 572 Exploring Requirements (Gause and Weinberg), 66 external environment, risks associated with, 90 F Fagan, M. E., 45, 77, 123, 212 Faller, Benoît, 530, 534 families of programs/products, 420–21 features (feature set), 319–44 case study: managing change effectively, 342–43 creeping (creeping requirements), 46–47, 123, 319 (see also requirements (requirements specifications or analysis)) evolutionary prototyping and, 439 kinds of controls, 320–21 outsourcing and, 492 reducing the feature set, 321–31 scrubbing (entirely removing) features, 329–30 staged delivery and, 555 timebox development and, 581 user-interface prototyping and, 591 versioned development, 330–31 further reading on, 343–44 late-project feature cuts, 341–42 mid-project changes, 331–41 effects of change, 335–36 killer-app syndrome, 332 methods of change control, 338–41 sources of change, 331–32 stopping changes altogether, 336–37 unclear or impossible goals, 332–35 recovery of projects and, 382–84 timebox development and, 580 feature-team structure, 307 Federal Aviation Administration (FAA), air-traffic-control workstation software, 320 feedback end-user or customer, evolutionary prototyping and, 437
on measurement data, 474–75 team structure and, 302 Fenton, Norman, 474, 475 firmware, 22 first-order estimation practice, Jones’s, 185 Fisher, Roger, 231, 562 flow time, productivity environments and, 506 focus groups, 241, 242 fourth-generation languages. See 4GLs Fox, Christopher J., 534, 535 Frakes, William B., 534, 535 Freedman, Daniel P., 77 Freeman, Peter, 538 friction customer relationships and, 238 between developers and customers, 41–42 Function Point Analysis (Dreger), 204 function-point estimation, 174–77 further reading on, 204 function-point multipliers, 176 fundamentals of software development, 8, 15, 51–79, 395 case study: lack of fundamentals, 52–55 following the instructions on, 77–78 further reading on, 66–68 management, 55–60 quality assurance (see quality assurance) technical, 60–66 fuzzy front end, 124 case study, 124–25 wasted time during, 44 G Gaffney, J., 187 gambles, 103 overly optimistic scheduling and, 216 Gates, Bill, 208, 290 Gause, Donald C., 66 Getting to Yes (Fisher and Ury), 222, 231 Gibbs, W. Wayt, xiii, 14, 320, 343 Gilb, Tom, 47, 59, 71, 74, 76, 82, 106, 204, 231, 428, 430, 432, 489, 558 Glass, Robert L., xvii, 76, 82, 206, 215, 216, 352, 367, 369, 416 goals setting, 445 achievement motivation and, 255–57 measurement and, 469 unclear or impossible mid-project changes and, 332–35 minimal-specification approach and, 326 gold-plating minimal-specification approach and, 326–27 timebox development and, 581 Gordon, V. Scott, 431, 435, 436, 438, 439, 441, 443, 571, 597 Grady, Robert B., 478, 479 Graham, Dorothy, 74, 76 Grant, E. E., 12, 217, 249 granularity of data, 471 Gray, T. E., 12, 217, 277 Griss, 531, 534 groups, teams distinguished from, 276 Grove, Andrew S., 60, 265 growth, possibility for, as motivation, 257–58 H Hackman, J.
Richard, 258, 272 Hall, Tracy, 474, 475 Hamilton, Scott, 14 Handbook of Walkthroughs, Inspections and Technical Reviews (Freedman and Weinberg), 77 Hatley, Derek J., 66 Hawthorne effect, 264–65 Hawthorne Works of the Western Electric Company, 263 Heckel, Paul, 317 Henry, S. M., 529, 534 Herbsleb, James, 537 heroics, 41 Herzberg, Frederick, 254, 265, 272 Hetzel, Bill, 52, 56, 57, 76, 351 High Output Management (Grove), 60 high risk, 102–3 Highsmith, Jim, 287 hiring practices, 40 human resources, code-like-hell approach and, 24 Humphrey, Watts S., 28, 44, 211, 502 Hunter, Richard, 262, 321, 540 Hutchens, David H., 74 hygiene factors morale and, 265 motivation and, 265–66 productivity environments and, 506 I Iansiti, Marco, 116, 207, 231 IBM, 70, 72, 174 Santa Teresa office facility, 509, 512 “IBM’s Santa Teresa Laboratory-Architectural Design for Program Development” (McCue), 513 idle periods, long-term teambuilding and, 293–94 IEEE Software (magazine), 479, 538 “Impacts of Life Cycle Models on Software Configuration Management” (Bersoff and Davis), 68 implementation, risks associated with, 91 “improvement in progress visibility” entry, in best-practice summary tables, 393 incentives, motivation and, 262 incremental development practices, 152 ineffective development practices, 3.
See also classic mistakes inefficient schedule-oriented practices, 20–21 information about risks, 97 team structure and, 302 information hiding, 63 designing for change and, 417–19 planned reuse and, 534 in-house software, defined, 22 In Search of Excellence (Peters and Waterman), 247, 263, 272, 317 Inside RAD (Kerr and Hunter), 262, 321 inspections, 74 further reading on, 76–77 integration of code daily build and smoke test and, 406 staged delivery and, 552 interactive software, defined, 22 interdependence among team members, 286 International Function Point User Groups (IFPUG), 174 J JAD (Joint Application Development), 241, 242, 324–25, 449–63 bottom line on, 461 further reading on, 463 interactions with other practices, 461 managing the risks of, 460 requirements specifications and, 450, 461–62 savings produced by, 450 side effects of, 460–61 using, 451–59 design phase, 451–52, 457–59 keys to success in, 462 planning phase, 451–57 job feedback, work itself as motivation and, 259 job satisfaction, defined, 254 Johnson, Donna L., 474 Joint Application Design (August), 463 Joint Application Development. See JAD Joint Application Development (Wood and Silver), 463 Joint Requirements Planning. See JRP Jones, Capers, xiii, xvii, 12, 42, 45, 46, 48, 57, 66, 67, 69–73, 81, 106, 123, 161, 163, 174, 179, 185, 192, 204, 207, 211, 215, 217, 227, 244, 258, 297, 305, 309, 319, 331, 338, 339, 343, 353, 356, 360, 363, 364–65, 369, 370, 422, 441, 461, 469, 476, 479, 506, 512, 524, 531, 534, 537, 581, 596, 601, 606, 608 Jones’s first-order estimation practice, 176, 185 Joos, Rebecca, 531 JRP (Joint Requirements Planning), 452 jumping into coding, as classic mistake, 45 K Karten, Naomi, 247 Katzenbach, Jon, 275, 296 Kemerer, Chris, 187, 188 Kerr, James, 262, 321, 540, 542 Kidder, Tracy, 23, 540, 542, 544, 545 Kitson, David H., 57, 62, 66, 70 Klein, Gary, 366 Klepper, Robert, 360 Kohen, Eliyezer, 63 Korson, Timothy D., 417 Krantz, Les, xvii L LaFasto, Frank M. 
J., 41, 256, 257, 279, 285, 288, 291, 294, 296, 299, 300, 302, 317 Lakhanpal, B., 40, 277, 293 languages. See 3GLs (third-generation languages); 4GLs (fourth-generation languages); rapid-development languages (RDLs) Laranjeira, Luiz, 197 large teams, 311–13 Larson, Carl E., 41, 256, 257, 279, 285, 288, 291, 294, 296, 299, 300, 302, 317 late projects, adding people to, 41 leaders (leadership), 283 JAD session, 454 recovery of projects and, 377–78 leads, technical, 313–14 Lederer, Albert L., xiii, 178, 319 legacy-systems reengineering, outsourcing, 494 lifecycle planning (lifecycle models), 16, 133–43, 151–52, 239. See also specific models case studies effective selection of lifecycle model, 159–61 ineffective selection of lifecycle model, 134–36 choosing the most rapid model, 154–58 code-and-fix model, 140 design-to-schedule, 149–50 design-to-tools, 152–53 evolutionary prototyping, 147–48 incremental development practices, 152 modified waterfall model, 143–47 off-the-shelf software, 153–54 selection of, 465 staged delivery, 148–49 strengths and weaknesses of, 156–58 waterfall model, 136–39 lines-of-code measurements, 477 Lister, Timothy, 12, 28, 41, 217, 231, 249, 271, 276, 288, 290, 293, 295, 334, 506, 511, 513, 543, 603, 608 long-term rapid development, 217 loss of control outsourcing and, 500 signing up and, 544 Lyons, Michael L., 253 M Maguire, Steve, 28, 68, 231, 270, 543, 603, 604, 608 maintainability, evolutionary prototyping and poor, 439 “major interactions and trade-offs” entry, in best-practice summary tables, 394 “major risks” entry, in best-practice summary tables, 394 management (managers).
See also Theory-W management of high-performance teams, 288–89 inappropriate involvement of technically inept, as morale killer, 268 manipulation by, as morale killer, 266–67 motivations of, compared to developers, 251–52 overly optimistic scheduling and relationships between developers and, 217 recovery of projects and, 377–78 technical leads and, 313–14 management controls, insufficient, as classic mistake, 45 management fundamentals, 55–60 estimation and scheduling, 55 further reading on, 58–60 measurement, 58 planning, 56 tracking, 56–58 management risks, 87–88 Manager’s Guide to Software Engineering, A (Pressman), 59, 75 Managing a Programming Project (Metzger), 56, 59 Managing Expectations (Karten), 247 Managing Software Maniacs (Whitaker), 247 Managing the Software Process (Humphrey), 502 manipulation by management, as morale killer, 266–67 Marciniak, John J., 493, 502 Marcotty, Michael, 68 Martin, James, 131, 293, 308, 317, 366, 452–55, 461, 463, 584, 586, 587 Masters, Stephen, 57, 62, 66, 70 maturity of productivity tools, 356–57 Mayo, Elton, 263 MBTI (Myers-Briggs Type Indicator) test, 253 McCarthy, Jim, 127, 242, 307, 317, 339, 341, 388, 407, 414, 432, 482 McConnell, Steve, xiv, 67, 75, 343, 414, 423, 432, 523, 524 McCue, Gerald M., 513 McDermott, Tim, 530, 534 McGarry, Frank, 12, 249, 277, 360, 469, 475, 530, 534 measurement, 467–79 benefits of using, 468 bottom line on, 478 as development fundamental, 58 expectations and, 468 further reading on, 478–79 interactions with other practices, 477 managing the risks of, 476–77 side effects of, 477 using, 469–76 accuracy of data, 476 analysis vs.
measurement, 473–74 baseline report, 475 feedback on measurement data, 474–75 Goals, Questions, Metrics practice, 469 granularity, 471–73 keys to success in, 478 limitations, 475–76 measurement group, 469–70 Pareto analysis, 473 what to do with collected data, 473–75 what to measure, 470–73 measurement group, 469 Measures for Excellence (Putnam and Myers), 203 Method for Assessing the Software Engineering Capability of Contractors, A (Humphrey and Sweet), 502 Metzger, Philip W., 56, 59, 221, 543, 603 Microsoft case study: motivation of developers at, 270–71 late-project feature cuts at, 341 “Microsoft Corporation: Office Business Unit” (Iansiti), 231 Microsoft Secrets (Cusumano and Selby), 272, 414, 432 Microsoft Windows NT 3.0, development of, 388, 408, 542–44 Microsoft Word for Windows 1.0 optimistic scheduling practices and, 207–9 Microsoft Visual C++, 242 milestones major, 484 miniature (see miniature milestones) military software, 22 Millington, Don, 236, 328 Mills, Harlan D., 12, 249, 305, 306, 543, 603 miniature milestones, 380–82, 481–89 bottom line on, 488 fine-grain control and, 483 further reading on, 489 generic task lists vs., 487 interactions with other practices, 488 managing risks of, 487 motivation and, 483 schedule risk and, 483 side effects of, 487 status visibility and, 482–83 summary of, 481 using, 484–87 assessing progress and recalibrating or replanning, 486–87 developers’ own mini milestones, 485 “done” and “not done” as only two statuses, 485 early or in response to a crisis, 484 exhaustive set of milestones, 486 keys to success in, 489 for short-term planning, 486 size limit, 485 minimal specifications, 323–29 benefits of, 325–26 keys to success in using, 328–29 lack of support for parallel activities and, 327 risks of, 326–28 wrong reason for using, 328 Modern Structured Analysis (Yourdon), 66 modularity, 63 “Modular Structure of Complex Systems, The” (Parnas, Clements, and Weiss), 423 MOI model, 289 Moore, Dave, 270 morale daily
build and smoke test and, 407 defined, 254 factors that lower, 265–69 measurement and, 468 minimal-specification approach and, 325–26 outsourcing and, 500 project recovery and, 377, 379 morale budget at Microsoft (case study), 270 More Programming Pearls (Bentley), 68 motivation(s), 14 achievement as, 255–57 case studies disheartening lunch with the boss, 250–51 highly motivational environment, 270–71 commitment-based approach and, 23 defined, 254 of developers, 251–54 further reading on, 271–72 heavy-handed campaigns to improve, as morale killer, 269 miniature milestones and, 483 minimal-specification approach and, 325–26 morale killers, 265–69 excessive schedule pressure, 267 heavy-handed motivation campaigns, 269 hygiene factors, 265–66 inappropriate involvement of technically inept management, 268 lack of appreciation for developers, 267 low quality, 269 management manipulation, 266–67 not involving developers, 268–69 productivity barriers, 269 overly optimistic scheduling and, 216 performance reviews and, 265 personal life as, 260–61 pilot projects and, 263–65 possibility for growth as, 257–58 rewards and incentives and, 262–63 technical-supervision opportunity as, 261 timebox development and, 581 undermined, as classic mistake, 40 work itself as, 258–60 motivation (motivation factors), 249–72 mutual trust, teamwork and, 285 Myers, Glenford, 73, 74, 76 Myers, Ware, 14, 182, 192, 518 Myers-Briggs Type Indicator (MBTI) test, 253 Mythical Man-Month, The (Brooks), 305, 317, 369, 418 N Naisbitt, John, 258, 508 NASA, 478 Software Engineering Laboratory, 74, 277, 352, 421, 469, 474, 475, 530, 533 negotiation (negotiating skills) estimates and, 221 principled (see principled negotiation method) push-me, pull-me, 47 separating the people from the problem, 222–23 noisy, crowded offices, 41 nominal schedules, 194–96 “No Silver Bullets-Essence and Accidents of Software Engineering” (Brooks), 348 O Object-Oriented Analysis and Design (Booch), 67
object-oriented design designing for change and, 421 information hiding, 417–19 planned reuse and, 534 Object-Oriented Design (Coad and Yourdon), 67 object-oriented programming, 367 Object-Oriented Rapid Prototyping (Connell and Shafer), 443 O’Brien, Larry, 358, 370 office space. See productivity environments offshore outsourcing, 496 off-the-shelf software, 153–54 strengths and weaknesses of, 157 O’Grady, Frank, 294 Oldham, Greg R., 258, 272 Olsen, Neil, 18 “One More Time: How Do You Motivate Employees?” (Herzberg), 272 “On the Criteria to Be Used in Decomposing Systems into Modules” (Parnas), 423 opportunistic efficiency, minimal-specification approach and, 326 optimistic schedules, overly, as classic mistake, 44 optimistic scheduling. See scheduling (schedules), overly optimistic organization of teams, 13 organization risks, 87–88 outsourcing, 491–502 bottom line on, 501 feature creep and, 492 further reading on, 502 interactions with other practices, 501 managing the risks of, 499–500 planning and, 493 requirements specification and, 492 reusable components and, 492 side effects of, 501 staffing flexibility and, 492 summary of, 491 time savings with, 492 using, 493–99 communication with the vendor, 493 contract considerations, 498–99 contract management, 493 double standards for outsourced work, 495 keys to success in, 501–2 kinds of arrangements, 495–96 legacy-systems reengineering, 494 management plan including risk management, 493 offshore outsourcing, 496–97 technical resources, 493–94 unstable requirements, 494 vendor evaluation, 497–98 overrun, project, 94 overtime desire for free, as rapid-development look-alike, 115 voluntary (see voluntary overtime) ownership, achievement motivation and, 255 P padding of estimates, 184 Page-Jones, Meilir, 67 Papaccio, 45, 62, 71, 212, 335, 585 Pareto analysis, 473 Parnas, David L., 417, 419, 420, 422, 423 Peat Marwick, 82 penalty for breaking the build, 410 people (peopleware issues).
See also personnel (employees; staff) classic mistakes related to, 40 as dimension of development speed, 11, 12–14 recovery of projects and, 376–79 Peopleware (DeMarco and Lister), 28, 231, 271, 513, 608 perception of development speed, 119–21 overcoming the perception of slow development, 121 performance product, evolutionary prototyping and, 437 team structure and monitoring of, 302 performance reviews, motivation and, 265 per-person-efficiency trade-offs, 127 Perry, Dewayne E., 476 personal life, as motivation, 260–61 personnel (employees; staff). See also people (peopleware issues) evaluations of, measurements misused for, 476 long-term teambuilding and, 293 outsourcing and staffing flexibility, 492 problem, 40–41 failure of teams and, 291–92 recovery of projects and, 377 risks associated with, 90 weak, 40 Peters, Chris, 255, 324, 413, 536 Peters, Thomas J., 247, 258, 263, 265, 271, 272, 317 Pfleeger, Shari Lawrence, 441, 533–36 philosophy, best practices and, 395 Pietrasanta, Alfred M., 14, 474 Pirbhai, Imtiaz A., 66 planned reuse, 531–35 planning (plans), 486 abandonment of, under pressure, as classic mistake, 44 change, 419 customer-oriented, 239 as development fundamental, 56 insufficient, as classic mistake, 44 outsourcing and, 493 overly optimistic scheduling and adherence to, 211 overly optimistic scheduling and quality of, 211 risk-management, 85, 96–97 Plauger, P.
J., 28, 66–67 point-of-departure spec, 324 politics placed over substance, 43 postmortems, interim, 102 “potential reduction from nominal schedule” entry, 392 Practical Guide to Structured Systems Design, The (Page-Jones), 67 Practical Software Metrics for Project Management and Process Improvement (Grady), 479 Prasad, Jayesh, xiii, 178, 319 precision, accuracy vs., 173 predictability, as rapid-development look-alike, 113 premature convergence, overly optimistic scheduling and, 213–15 premature releases, daily build and smoke test and, 412–13 presentation styles, for estimates, 179–82 Pressman, Roger S., 59, 75, 79 principled negotiation method, 222–29, 503 degrees of freedom in planning a software project and, 225–26 focusing on interests, not positions, 224–25 insisting on using objective criteria, 227–29 inventing options for mutual gain, 225–27 separating the people from the problem, 222–23 Principles of Software Engineering Management (Gilb), 59, 71, 106, 204, 231, 428, 430, 432, 489, 558 prioritization of risks, 85, 94–96 problem-resolution team, 300 process abuse of focus on, 14–15 classic mistakes related to, 44–46 as dimension of development speed, 11, 14–16 recovery of projects and, 379–82 risks associated with, 91 product.
See also features (feature set) as dimension of development speed, 11, 17 risks associated with, 89–90 trade-offs among schedule, cost, and, 126–27 product characteristics, 17 product families, 420–21 productivity barriers to, 269 failure of teams and, 290 classic mistakes and (see classic mistakes) factors that affect, 37–38 lines-of-code measurement of, 477 motivation and, 40 peopleware issues and, 12, 13 per-person-efficiency trade-offs and, 127 team permanent-team strategy, 292 variations in, 276–77 variations in, 14 among groups (teams), 12 among individuals, 12 productivity environments, 505–13 bottom line on, 511 further reading on, 513 interactions with other practices, 511 managing the risks of, 510–11 side effects of, 511 summary of, 505 using, 508–9 keys to success in, 513 productivity tools, 345–70 acquisition of, 353–58 criteria for tool selection, 356–58 risks of setting up a tools group, 355 tools person or group, 354–55 case studies effective tool use, 368–69 ineffective tool use, 346–47 defined, 346 further reading on, 369–70 maturity of, 356–57 role in rapid development, 348–52 areas of special applicability, 349–50 limitations of productivity tools, 350–51 ultimate role of productivity tools, 351–52 silver-bullet syndrome (see silver bullets (silver-bullet syndrome)) strategy for using, 352–53 use of, 358–63 how much schedule reduction to expect, 360–63 training, 359 when to deploy, 358–59 product size, 17 professional-athletic-team model, 309–10 program families, 420–21 Programming on Purpose (Plauger), 66 Programming on Purpose II (Plauger), 28 Programming Pearls (Bentley), 68 “Project Pathology: A Study of Project Failures” (Thomsett), 388 project recovery. See recovery of projects prototyping.
See also evolutionary prototyping lifecycle model; throwaway prototyping; user-interface prototyping prototyping, rapid, use of term, 434 prototyping, demonstration, 591 Psychology of Computer Programming, The (Weinberg), 272 Putnam, Lawrence H., 14, 182, 192, 203, 518 Q quality lines-of-code measurement of, 477 low daily build and smoke test and, 406 as morale killer, 269 overly optimistic scheduling and, 215–16 trade-offs, 127 quality assurance fundamentals of, 69–77 defect rates, 69–71 error-prone modules, 72 further reading on, 75–77 technical reviews, 73–75 testing, 72–73 purposes of, 15 shortchanged, as classic mistake, 45 Quality Software Management (Weinberg), 58, 218, 388, 608 R RAD (Rapid Application Development), 366 Rapid Application Development (Martin), 131, 317, 366, 463, 587 rapid development. See also speed of development approaches that contribute to, 130 core issues in, 109–31 further reading, 131 kind of rapid development needed, 111–16 odds of completing on time, 116–19 perception and reality of development speed, 119–21 trade-off decisions, 126–27 typical schedule-improvement pattern, 128–30 variations in development practices, 109–11 where most of the time is spent, 122–25 customers’ importance to, 236–38 defined, 2 general kinds of, 18–21 all-out rapid development, 20–21 efficient development, 19–20 efficient development tilted toward best schedule, 20 look-alikes, 113–15 trade-offs among different approaches to, 11, 18 ways of attaining, 2–3 rapid-development languages (RDLs), 515 bottom line on, 522–23 examples of, 516 interactions with other practices, 522 keys to success in, 523 managing the risks of, 520 side effects of, 522 using, 519 rapid prototyping, use of term, 434 “Rapid Prototyping: Lessons Learned” (Gordon and Bieman), 443 Raytheon, 14, 15, 21 RDLs.
See rapid-development languages “Realities of Off-Shore Reengineering” (Dedene and De Vreese), 502 real-time systems, 22 recalibration of schedules, 199–200 recognition. See appreciation (recognition) recovery of projects, 371–88. See also features (feature set), late-project feature cuts and mid-project changes case studies successful project recovery, 385–87 unsuccessful project recovery, 372–73 characteristics of projects in trouble, 371–72 fundamental approaches to, 373–74 philosophy of this book’s approach to, 374 plan for, 375–85 first steps, 376 people issues, 376–79 process, 379–82 product, 382 timing, 385 reestimation mistakes, 46 Reifer, Donald J., 493, 502 Reinertsen, D. G., 131 Reinventing the Corporation (Naisbitt and Aburdene), 258 release cycles, short, 339 requirements (requirements specifications or analysis), 61–62, 124 case studies, 234–36, 246 creeping (feature creep) (see features (feature set), creeping (creeping requirements)) customer-oriented, 239–42, 338 detailed, 321–23 further reading on, 66 gold-plating, 46 JAD and, 450, 461–62 minimal specification, 323–29 benefits of, 325–26 keys to success in using, 328–29 lack of support for parallel activities, 327 omission of key requirements, 326 risks of, 326–28 unclear or impossible goals, 326 wrong reason for using, 328 outsourcing and, 492, 494 recovery of projects and, 382–84 risks associated with, 89 scrubbing (entirely removing), 329–30, 525 research-oriented development, 47 resources, targeting, 16 reuse (reusable components), 527–38 bottom line on, 537 further reading on, 538 interactions with other practices, 536–37 keys to success in using, 537–38 managing the risks of, 535–36 outsourcing and, 492 side effects of, 536 summary of, 527 using, 528–35 opportunistic reuse, 528–31 planned reuse, 531–35 rewards case study: at Microsoft, 270–71 motivation and, 262 rework amount of time spent on, 123 avoidance of, 15 customer
relationships and, 237 Rich, Charles, 366, 367 Rifkin, Stan, 468, 474 risk analysis, 85, 91–94 risk assessment, elements of, 85 risk, assuming, 98 risk control, 85, 96–102 risk exposure, 92 risk identification, 85–91 complete list of schedule risks, 86–91 contractor risks, 89 customer risks, 88–89 design and implementation risks, 91 development environment risks, 88 end-user risks, 88 external environment risks, 90 organization and management risks, 87–88 personnel risks, 90 process risks, 91 product risks, 89–90 requirements risks, 89 schedule creation risks, 87 general risks, 85–86 most common schedule risks, 86 risk management, 8, 15, 81–106. See also managing-risks sections of best-practice Chapters 17–43 case studies, 82–83 systematic risk management, 103–5 disadvantages of, 82 elements of, 84 further reading on, 106 insufficient, as classic mistake, 44 levels of, 84 outsourcing and, 493 risk-management planning, 85, 96–97 risk monitoring, 85, 100–102 risk officer, 102 risk prioritization, 85, 94–96 risk quantification, estimation and, 180–81 risk resolution, 85, 97–100 risks, 102 customers-related, 237 roles, clear, team structure and, 302 Ross, Rony, 484, 559, 568 Rothfeder, Jeffrey, 82 runaway prevention, as rapid-development look-alike, 113 Rush, Gary W., 461 Russell, Glen W., 74 S Sackman, H., 12, 217, 249 Saiedian, Hossein, 14 sashimi model, 143–44 schedule compression, 191–92 schedule constraints, 112–13 schedule creation risks, 87 schedule estimation, 183–85 schedule-oriented practices, 3, 9, 10. See also speed of development; visibility-oriented practices focusing only on, 10 focusing too much on single, 5 kinds of, 4 schedule pressure beating, 220–29 build and smoke testing even under, 411 excessive, as morale killer, 267 schedule recalibration, 199–200 schedule risk, miniature milestones and, 483 schedule-risk-oriented practices.
See fundamentals of software development scheduling (schedules), 205–31 beating schedule pressure, 220–29 principled negotiation, 222–29 case study: a successful schedule negotiation, 229–30 commitment-based, 184 as development fundamental, 55 further reading on, 231 overly optimistic, 207–20 accuracy of schedules and, 210 adherence to plan and, 211 bottom line and, 218–20 burnout and, 217 creativity and, 216–17 customer relations and, 213 developer-manager relationships and, 217 effects of, 210–15 example of, 207 gambling and, 216 long-term rapid development and, 217 motivation and, 216 premature convergence, 213–15 project focus and, 213 quality and, 215–16 quality of project planning and, 211 root causes of, 209–10 underscoping the project and, 212 overly optimistic, as classic mistake, 44 principled negotiation method and, 222–29 probability of meeting, 116–19 timebox development and, 580 trade-offs among product, cost, and, 126 Scherr, Allen, 285, 540, 542 Scholtz, 367, 421, 534 Schulman, Edward, 12, 217, 255, 276 scientific software, 22 scribe, in JAD sessions, 455 scrubbing (entirely removing) features, 329–30, 525 search-and-rescue team model, 307–8 Seewaldt, T., 12, 217, 277 Selby, Richard, 74, 116, 255, 270, 272, 290, 324, 327, 341, 407, 414, 432, 536 Shafer, Linda, 443 Shaw, 436 Shen, V. Y., 479 Sherman, Roger, 138 shortest possible schedules, 188–92 efficient schedules and, 193 Showstopper!
(Zachary), 272, 388 shrink-wrap software, defined, 22 Shultz, Scott, 586 signing up, 539–45 bottom line on, 545 in different environments, 542 further reading on, 545 giving people a choice about, 541 interactions with other practices, 544–45 keys to success in using, 545 managing the risks of, 543–44 shaky requirements and, 542 side effects of, 544 at the team level, 541 vision and, 540 Silver, Denise, 463 silver bullets (silver-bullet syndrome), 47–48, 363–69, 520–21 biting the bullet, 367–68 identifying, 365–67 Sims, James, 452 size estimation, 174–82 definition of size, 175 function-point estimation, 174 size of product, 17 skill variety, work itself as motivation and, 258 skunkworks-team structure, 306–7 Smith, Douglas, 275, 296 Smith, P. G., 131 smoke test. See daily build and smoke test Sobol, Marion G., 366 software, types of, 22 Software Acquisition Management (Marciniak and Reifer), 493, 502 Software Configuration Management (Babich), 68 Software Configuration Management (Bersoff), 68 software configuration management (SCM) as development fundamental, 65–66 further reading on, 68 software development.
See fundamentals of software development; rapid development software engineering, further reading on, 79 Software Engineering (Pressman), 75, 79 Software Engineering (Sommerville), 75, 79 Software Engineering Economics (Boehm), 203 Software Engineering Institute, 14, 28, 55, 57, 59, 66, 70, 374, 502 Software Engineering Laboratory (NASA), 74, 277, 352, 421, 469, 474, 475, 530, 533 Software Engineering Metrics and Models (Conte et al.), 479 “Software Engineering Under Deadline Pressure” (Costello), 231 Software Implementation (Marcotty), 68 Software Inspection (Gilb and Graham), 76 Software Measurement Guidebook (NASA), 478 Software Metrics (Grady and Caswell), 478 “Software Productivity Research Programming Languages Table” (Jones), 524 Software Risk Management (Boehm), 106, 161 “Software Risk Management: Principles and Practices” (Boehm), 106 “Software’s Chronic Crisis” (Gibbs), 343 Software Sizing and Estimating (Symons), 204 Sommerville, Ian, 75, 79 Soul of a New Machine, The (Kidder), 540, 542, 545 source-code control, lack of automated, as classic mistake, 48 specifications. See requirements (requirements specifications or analysis) speed of development. See also rapid development classic mistakes and, 37–39 dimensions of, 11–18 people, 12–14 process, 14–16 product, 17 relative importance of, 21–22 technology, 17–18 typical pattern for improving, 128–30 spiral lifecycle model, 141–43, 547 strengths and weaknesses of, 156 sponsorship of projects, lack of effective, 42 staff.
  See personnel (employees; staff)
staged delivery lifecycle model, 148–49, 425, 426, 549–58
  benefits of, 550–52
  bottom line on, 558
  further reading on, 558
  interactions with other practices, 557
  keys to success in using, 558
  managing the risks of, 555–56
  rigidity of, 557
  side effects of, 556–57
  strengths and weaknesses of, 157
  summary of, 549
  using, 552–55
    developer focus, 553–54
    kinds of projects, 554–55
    technical dependencies, 553
    theme releases, 554
Stahl, Leslie Hulet, 161
Standish Group, xiii, 42, 61, 211, 236, 243, 319
Stapleton, Jennifer, 236, 328
startup costs, long-term teambuilding and, 293
statistics, overreliance on, 475
status-oriented office improvements, lost productivity from, 510
status visibility
  measurement and, 468
  miniature milestones and, 482–83
  signing up and, 543
  user-interface prototyping and, 591
Staudenmayer, Nancy A., 476
storyboards, minimal specification and, 324
Strategies for Real-Time System Specification (Hatley and Pirbhai), 66
strategy for rapid development
  alternative (commitment-based or code-like-hell approach), 22–25
  case studies
    absence of a clear, 6
    clear, 25
  in general, 8–10
stress, software errors and, 215
Structured Design (Yourdon and Constantine), 67
SWAT team model, 308–9
Sweet, W. L., 502
Symons, Charles, xiii, 171, 179, 192, 204, 351
synergy, 18
systems software, defined, 22

T
tactical-execution team, 300–301
task identity, work itself as motivation and, 258
task significance, work itself as motivation and, 258–59
team identity
  lack of, 289
  sense of, 281
teams (team projects).
  See also teamwork
  organization of, 13
  productivity differences among, 12, 13
  staff selection for, 13
team structure, 297–317
  best for rapid development, 302–3
  case studies
    good match between project objectives and team structure, 315–16
    mismatch between project objectives and team structure, 297–99
  clear roles and accountabilities and, 302
  effective communication and, 302
  fact-based decision making and, 302
  kinds of, 300–301
  managers and technical leads, 313–14
  models of, 304–13
    business team, 304
    chief-programmer team, 305–6
    feature team, 307
    large teams, 311–13
    professional athletic team, 309
    search-and-rescue team, 307–8
    skunkworks team, 306–7
    SWAT team, 308–9
    theater team, 310–11
  monitoring of individual performance and providing feedback, 302
teamwork, 273–94
  case studies
    high-performance team, 277–78
    poor group dynamics, 274–75
    selection of team members, 282–83
  creating a high-performance team, 278–89
    challenging work, 280–81
    characteristics of a high-performance team, 278–79
    commitment to the team, 285
    competent team members, 282–84
    effective communication, 286
    failure of teams, 289–92
    high level of enjoyment, 288
    interdependence among members, 286
    long-term teambuilding, 292–94
    managing a high-performance team, 288–89
    mix of roles and skills, 283–84
    mutual trust, 285
    results-driven structure, 282
    sense of autonomy, 286–87
    sense of empowerment, 287
    sense of team identity, 281
    shared vision or goal, 279
    small team size, 287–88
  groups distinguished from teams, 276
  importance to rapid development, 276–78
  software uses of, 275–76
  summary of guidelines, 294–95
Teamwork (Larson and LaFasto), 316
technical fundamentals, 60–66
  construction, 64–65
  design, 62–63
  requirements management, 61–62
  software configuration management (SCM), 65
technical leads, 313–14
technical reviews
  comment on, 74–75
  as development fundamental, 73–75
  further reading on, 76–77
technical-supervision opportunities, as motivation, 261
technology
  classic mistakes related to, 47–48
  as dimension of development speed, 11, 17–18
“Ten Commandments of Tool Selection, The” (O’Brien), 370
Tesch, Deborah B., 366
testing
  as development fundamental, 72–73
  further reading on, 76
Thayer, Richard H., 59
theater team model, 310–11
theme for the product, minimal specification and, 324
theme releases, staged delivery and, 554
Theory-W management, 239, 376, 559–68
  bottom line on, 568
  further reading on, 568
  interactions with other practices, 567–68
  managing the risks of, 566–67
  side effects of, 567
  summary of, 559
  using, 561–66
    keys to success in, 568
    kinds of projects that can use Theory-W, 566
    manager’s role, 566
    step 1: establish win-win preconditions, 562–64
    step 2: structure a win-win software process, 564
    step 3: structure a win-win software product, 565
“Theory-W Software Project Management: Principles and Examples” (Boehm and Ross), 568
third-generation languages.
  See 3GLs
Third Wave Project Management (Thomsett), 106
Thomsett, Rob, 42, 106, 253, 293, 294, 317, 388
Thriving on Chaos (Peters), 258
throwaway prototyping, 433, 569–73
  evolutionary prototyping and, 441
  user-interface prototyping and, 591–92
timebox development, 575–83
  bottom line on, 582
  further reading on, 583
  interactions with other practices, 582
  managing the risks of, 581
  side effects of, 581
  using, 577–80
    entrance criteria, 579
    keys to success in, 583
    timebox team, 580
    variations, 580
tools group, 585
  acquisition of productivity tools and, 354–56
  summary of, 585
Top-10 Risks List, 100–102, 587
Townsend, Robert, 287
tracking, as development fundamental, 56–58
Tracz, Will, 534–36, 538
trade-offs
  per-person-efficiency, 127
  quality, 127
  among schedule, cost, and product, 126–27
training, in productivity tool use, 357
transferring risks, 97
trust
  failure of teams and lack of, 290
  among team members, 285
Turner, Albert J., 556
turnover
  long-term teambuilding and, 293
  overly optimistic scheduling and, 217
Tutorial on Software Design Techniques (Parnas), 423
Tutorial: Software Engineering Project Management (Thayer), 59
Tutorial: Software Quality Assurance (Chow, ed.), 76

U
Udell, John, 538
Umphress, David, 49
underscoping, overly optimistic scheduling and, 212
unrealistic expectations, 42
  of customers, 120–21
  after JAD session, 460
  about performance, evolutionary prototyping and, 437–38
  about schedule and budget
    evolutionary prototyping and, 436
    throwaway prototyping and, 572
upstream activities, shortchanged, 45
Ury, William, 231, 562
user-interface prototyping, 324, 589–97
  bottom line on, 597
  interactions with other practices, 596
  managing the risks of, 595–96
  side effects of, 596
  summary of, 589
  using, 591–95
    choosing a prototyping language, 592
    end-user feedback and involvement, 593–94
    evolutionary vs. throwaway prototyping, 591–92
    finished product, 595
    keys to success in, 597
    prototype as a Hollywood movie facade, 593
user manual, as specification, 324

V
Vaishnavi, Vijay K., 417
Valett, J., 12, 249, 277, 469
van Genuchten, Michiel, 46, 184, 199, 210, 486
vendors. See outsourcing
version 1 of productivity tools, 356
version 2
  features, 339
  of productivity tools, 357
version 3 of productivity tools, 357
versioned development, 330–31
visibility-oriented practices, 10
vision
  failure of teams and lack of, 289
  managing a high-performance team and, 288
  sharing a, in high-performance teams, 279
  signing up and, 540
vision statement, minimal specification and, 324
Visual C++, Microsoft, 242
voluntary overtime, 599–608
  bottom line on, 606–7
  further reading on, 608
  interactions with other practices, 606
  managing the risks of, 605–6
  side effects of, 606
  summary of, 599
  using, 600–605
    caution about too much overtime, 604–5
    developer-pull vs. leader-push approach, 600–601
    how much overtime to ask for, 603
    keys to success in, 607
    out-of-control projects, 603
    overtime should not be mandatory, 601–3
Vosburgh, J. B., 37, 38, 60, 240, 241, 256, 319, 335
Votta, Lawrence G., 476

W
Waligora, Sharon, 530, 534
walkthroughs, 73
  estimating by, 178
wasted effort, minimal-specification approach and avoidance of, 326
wasted time, during fuzzy front end, 44
waterfall lifecycle model, 136–39
  modified, 143–47
    with risk reduction, 146
    sashimi model, 143
    with subprojects, 145
  strengths and weaknesses of, 156
Waterman, Robert H., Jr., 247, 263, 271, 272, 317
Waters, Richard, 366, 367
Weinberg, Gerald M., 12, 40–41, 58, 66, 77, 217, 218, 221, 255, 271, 272, 276, 296, 388, 600, 601, 608
Weiss, David M., 423, 469
“What Are the Realities of Software Productivity/Quality Improvements” (Glass), 369
“When the Rubber Hits the Road: A Guide to Implementing Self-Managing Teams” (Thomsett), 317
Whitaker, Ken, 247, 267, 554
Why Does Software Cost So Much? (DeMarco), 231, 317
“Why Is Technology Transfer So Hard?” (Jones), 370
Wicked Problems, Righteous Solutions (DeGrace and Stahl), 161
Wiener, Lauren Ruth, 321
Windows NT 3.0, development of, 388, 408, 542–44
win-win projects, 239
WinWord (Microsoft Word for Windows) 1.0, optimistic scheduling practices and, 207–9
Wirth, Niklaus, 319
Wisdom of Teams, The (Katzenbach and Smith), 275
wishful thinking, 43
  schedules and, 221
Witness (film), 273, 296
Wood, Jane, 463
Word for Windows 1.0, optimistic scheduling practices and, 207–9
work environments. See also productivity environments
  noisy, crowded, 41
Work Redesign (Hackman and Oldham), 272
Wosser, 531, 534
Writing Solid Code (Maguire), 68

Y
Yourdon, Edward, 66–67, 73, 337, 423

Z
Zachary, Pascal, 23, 271, 272, 388, 408, 412, 413, 542, 544
Zawacki, Robert, xviii, 259, 351
Zelkowitz, 351