The Bluffer’s Guide to A Spiral Model of Software Development and Enhancement

When we’re talking about iterative and incremental software development, it’s easy to forget that it wasn’t all that long ago – well, okay, if you’re my age it doesn’t seem all that long ago – that it wasn’t really a thing.

In the formative years of our profession, in the 1970s, there were three schools of thought that dominated the software design and development methodology landscape:

The “Waterfall” Model – a sequence of development activities (requirements, design, coding, testing, operations), with some basic level of iteration built in to allow us to revisit earlier activities (“do it twice”)

Structured Programming – often seen working with the Waterfall model in the 1970s (the Martin Scorsese and Robert De Niro of software methodology)

Rapid Prototyping – a highly iterative, customer-centric approach to design that used a series of low-fi prototypes to pin down the details before “building it properly”.

In his seminal 1986 paper, A Spiral Model of Software Development and Enhancement, Barry Boehm discusses the drawbacks and risks of these dominant approaches, and then proposes a new model of the software development life cycle that incorporates their strengths and addresses their weaknesses.

This model emphasizes iterative development, risk analysis, and prototyping. It’s designed to combine elements of both the waterfall model and prototyping approaches, ensuring a systematic but flexible software development process. The Spiral Model’s key feature is its focus on early identification and reduction of project risks, and it allows for incremental refinement of the product through successive spirals or phases. This approach has had a significant influence on how software development is approached, especially in complex, high-risk projects.

Key Points

  1. Introduction of the Spiral Model: Presents a new approach to software development, combining elements from the traditional Waterfall Model and prototyping, aiming for a systematic yet adaptable process.
  2. Iterative Development: The model emphasizes iterative cycles of development, where each cycle includes stages of planning, risk analysis, engineering, and evaluation (sketched in code after this list).
  3. Risk Analysis: Central to the model, it focuses on the early identification and continuous management of risks throughout the project’s lifecycle.
  4. Prototyping: Utilizes prototyping extensively for refining requirements and design, allowing for user feedback and system adaptation.
  5. Phased Approach: Breaks down development into smaller segments, each encompassing objective setting, risk assessment, development, and validation.
  6. User Involvement: Stresses high levels of customer involvement to ensure that the evolving system aligns with user needs.
  7. Continuous Testing: In the Spiral Model, testing is an integral and continuous activity that occurs throughout the development process.
  8. Flexibility and Adaptability: Highlights the model’s adaptability to changes and varying risk profiles, making it suitable for large, complex, and high-risk projects.
  9. Comparison with Other Models: Discusses advantages over traditional models like Waterfall, particularly in addressing uncertainty and risk.
  10. Implementation Guidelines: Provides practical guidance for implementing the Spiral Model, emphasizing expert judgment in risk evaluation.
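
There’s no code in Boehm’s paper, of course, but if it helps to picture the cycle described in points 2 and 5, here’s a toy Python sketch of risk-driven spiral cycles. It’s entirely hypothetical – the risk numbers are plucked from thin air – but it shows the shape of the thing: prototype while the risk is high, engineer increments once it isn’t.

```python
# A toy sketch of spiral cycles - illustrative only, not from Boehm's paper.
# Each lap: plan, analyse risk, then either prototype (while risk is high)
# or engineer the next increment (once risk is acceptable), then evaluate.

def spiral_development(initial_risk: float, acceptable_risk: float = 0.1,
                       max_cycles: int = 6) -> list[str]:
    history, risk = [], initial_risk
    for cycle in range(1, max_cycles + 1):
        if risk > acceptable_risk:
            # High risk: build a prototype to attack the biggest risks first.
            history.append(f"cycle {cycle}: prototype (risk={risk:.2f})")
            risk /= 2  # each prototype resolves some of the uncertainty
        else:
            # Risk is acceptable: engineer the next increment of the product.
            history.append(f"cycle {cycle}: increment (risk={risk:.2f})")
    return history

for step in spiral_development(initial_risk=0.8):
    print(step)  # three prototyping laps, then three incremental laps
```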

The influence of Boehm’s paper can be seen in pretty much all of the software development methodologies that followed. Anybody who proposes not approaching development in an iterative and incremental manner these days will be asked to hand in their methodologist badge and gun. Software development since 1986 has mostly been variations on the Spiral Model theme. Some approaches may take it to eXtremes, of course. But that’s another Bluffer’s Guide for another day.

The Bluffer’s Guide to Managing The Development of Large Software Systems

In today’s super-compact summary of a seminal work in software development, we’re going to open the can of worms that is Winston Royce’s 1970 paper Managing The Development of Large Software Systems.

Royce’s paper is often credited with introducing the Waterfall model of software development. Royce outlines a sequential design process with distinct phases such as requirements, design, implementation, verification, and maintenance. He emphasizes the importance of thorough documentation, rigorous step-by-step development, and early stage planning.

However, Royce also identifies the obvious shortcomings of this model, particularly its inflexibility and the difficulty of accommodating changes after the process begins. The paper has had a significant impact on software development methodologies, shaping how large-scale projects are approached.

It is often misinterpreted as an endorsement of the Waterfall model. In reality, Royce did not advocate for the Waterfall model in its strict, linear form. While he presented what later became known as the Waterfall model (he never named it as such in his paper, but others used “Waterfall” to describe this linear model later – it’s kind of like “flying saucers” in that respect), he actually highlighted its limitations and risks, particularly the challenge of accommodating changes once the project is underway.

Royce’s key argument was for the incorporation of iterative elements, such as feedback loops and overlapping development phases, to address these shortcomings. He emphasized the need for more flexibility and adaptability in the software development process. Thus, his paper, often cited as the origin of the Waterfall model, was more a critique and a call for a more iterative approach than an endorsement of the rigid, linear process that the Waterfall model is known for.

In the decades that followed, many managers continued to misinterpret the intention of the paper, forcing teams to attempt the impossible – i.e., get it right first time. And in this sense, Royce’s paper has unintentionally caused considerable misery for countless thousands of development teams. Those managers, of course, probably never actually read the paper. Hence my parallel with “flying saucers” (the private pilot who, in 1947, reported unidentified aircraft moving at supersonic speeds never said they looked like saucers. But after news outlets reported them as “flying saucers”, suddenly everybody was reporting saucer-shaped craft.)

There are still many, many teams out there looking for “flying saucers” when in reality no such thing likely exists. In practice, we never get it right first time – we always have to iterate. The main difference is that teams who claim to be following a Waterfall process have a first iteration that’s reeeeaaally long, and then in the run-up to a release (the “testing phase”), they start iterating rapidly to “get the ball in the hole”. In reality, there’s no such thing as “Waterfall” software development.

Key Points

  1. Sequential Phases: Royce described a sequential approach to software development with distinct phases: requirements specification, design, coding, testing, and operations.
  2. Documentation Importance: He emphasized the importance of documentation at each phase for effective communication and coordination.
  3. Early Stage Planning: Detailed planning in the early stages of the project was highlighted as crucial for success.
  4. Testing and Debugging: The paper stressed the importance of thorough testing and debugging in the later stages of development.
  5. Client Involvement: Royce advocated for involving the client at early stages and maintaining constant communication.
  6. Model Limitations: He identified limitations of the sequential model, particularly its inflexibility and the difficulty in accommodating changes.
  7. Iterative Elements: Royce recommended iterative elements, like overlapping development phases and prototyping, to address these limitations.
  8. Risk of Sequential Approach: The paper pointed out the risks associated with a purely sequential approach, especially for large, complex projects.
  9. Critique of Linear Models: Royce’s critique of linear, non-iterative models underlined the need for more adaptable and flexible methodologies in software development.

While Royce does indeed advocate for a more iterative approach to development, it’s doubtful he ever envisaged the level of iteration that’s been favoured since (he talks in the paper about “doing it twice” – yeah, and then do it some more!). And he still places great emphasis on the importance of planning and documentation: two activities we now know have limited value compared to feedback and active collaboration.

But for those who actually read the paper, there’s no denying it would have been a light bulb moment back then. Its influence in later thinking about the software development life cycle (e.g., Barry Boehm’s Spiral Model, proposed in 1986) is apparent. At the very least, “Managing The Development of Large Software Systems” started a conversation that’s still going on today.

The Bluffer’s Guide to The Design of Everyday Things

I continue my series of hyper-distilled summaries of seminal works in the field of software development with a book that doesn’t actually come from the field of software development, but that’s had a huge impact on the way we design user interfaces.

The Design of Everyday Things by Donald A. Norman, published in 1988, focuses on the design principles of everyday objects, emphasizing user-friendly and intuitive interfaces. Norman introduces concepts like affordances, signifiers, and feedback, which have become fundamental in software design. His insistence on user-centered design reshaped how designers approach software interfaces, prioritizing ease of use and user experience. The book’s influence is evident in the emphasis on usability in modern software development, leading to more intuitive and accessible digital products. Norman’s work highlights the importance of understanding the user’s perspective in the design process, making it a cornerstone in the field of human-computer interaction.

Key Points

  • Affordances: Objects should indicate how they can be used. An affordance is a quality of an object that suggests its function, like a handle on a door.
  • Signifiers: These are signals or symbols that indicate what action to take, like push or pull labels on doors.
  • Mapping: The relationship between controls and their effects should be clear and logical. For example, a switch designed for a light should be positioned in a way that intuitively suggests which light it controls.
  • Feedback: Users should receive immediate and clear feedback from their actions. For instance, when a button is pressed, there should be an indication that it has been activated.
  • Constraints: Design should limit the actions that can be performed, preventing error. For example, a USB plug that only fits one way prevents incorrect insertion.
  • Error Tolerance and Recovery: Designs should anticipate possible errors and allow for easy recovery from them, minimizing the cost of mistakes.
  • User-Centered Design: Focus on the needs, abilities, and limitations of the user. Design should not force users to adapt to the system; rather, systems should be built to suit users.
  • Conceptual Models: Users should be able to form a good mental model of how a system works to use it effectively.

Norman’s book revolutionized how designers think about user interaction, emphasizing the importance of understanding the user’s perspective and designing intuitive, user-friendly interfaces. These principles, while illustrated through everyday objects, have been widely applied in software design, enhancing usability and user experience in digital products.

  1. Affordances
    • Example: In a drawing app, a pencil icon represents a tool for drawing lines. The icon’s appearance suggests its function, making it intuitive for users to understand and use.
  2. Signifiers
    • Example: In web forms, asterisks (*) next to certain fields signify that they are required. This helps guide the user in completing the form correctly.
  3. Mapping
    • Example: In a music player application, the volume slider moves from left (low volume) to right (high volume), mapping spatially to the concept of increasing and decreasing volume.
  4. Feedback
    • Example: When a user submits a form on a website, a message appears confirming the submission or indicating errors to be corrected. This feedback helps users understand the result of their action.
  5. Constraints
    • Example: In an online payment form, the credit card number field only allows the entry of numbers and automatically formats them into groups of four, preventing typing errors.
  6. Error Tolerance and Recovery
    • Example: An email client auto-saves drafts, so if the application crashes or the user accidentally closes it, the written email isn’t lost and can be easily recovered.
  7. User-Centered Design
    • Example: A ride-sharing app uses large buttons, clear labels, and a simple interface to accommodate users who might not be tech-savvy, ensuring the app is accessible to a wide range of users.
  8. Conceptual Models
    • Example: A file management system uses a ‘folder’ metaphor, where files can be ‘placed’ into folders. This model is easy for users to understand as it mimics the physical filing systems familiar to them.

Each of these principles helps to create more intuitive, user-friendly software interfaces, improving the overall user experience and making technology more accessible and effective.
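
If it helps to see a couple of these in code, here’s a minimal, hypothetical Python sketch of the constraints and feedback examples above: a card-number field that accepts only digits, groups them in fours, and tells the user what happened. (Real card numbers vary in length, of course – this is purely illustrative.)

```python
# A hypothetical sketch of Norman's "constraints" and "feedback" principles
# applied to a card-number field - not from the book, just an illustration.

def format_card_number(raw: str) -> tuple[str, str]:
    """Constrain input to digits, group them in fours, and return feedback."""
    digits = "".join(ch for ch in raw if ch.isdigit())  # constraint: digits only
    grouped = " ".join(digits[i:i + 4] for i in range(0, len(digits), 4))
    if len(digits) == 16:
        return grouped, "Card number accepted."  # feedback: success
    return grouped, f"Card numbers have 16 digits; you entered {len(digits)}."

print(format_card_number("1234-5678 9012 3456"))  # ('1234 5678 9012 3456', 'Card number accepted.')
print(format_card_number("1234 56"))              # partial entry gets corrective feedback
```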

If you’re familiar with the work of usability expert Jakob Nielsen, you may have noticed a similarity between Norman’s principles of user-centred design and Nielsen’s user interface design heuristics. Indeed, Nielsen and Norman formed a consulting company together – the Nielsen Norman Group – to offer training and guidance on user experience design. It’s a small world!

It’s easy to forget just how little attention was paid to the design of user interfaces until GUIs became popular in the 1980s. For many, user experience happened at a command line or on a green screen.

And it’s frustrating to see how quickly, when the dotcom era exploded, we forgot much of what we’d learned about human-computer interaction (HCI) design in the GUI age, and user experience was reinvented as an interactive document design discipline. Today, it’s not at all uncommon to have to deal with sucky user experiences that look amazing.

These principles are as relevant today as they were in 1988 – and not just to software – and arguably should be considered part of a developer’s arsenal of software design skills. How many user experiences could be dramatically improved? Yeah, I’m looking at you, Open Source Software!

The Bluffer’s Guide to Structured Programming

My series of hyper-distilled summaries of some of the most influential books and papers about software development continues with a work that has had a wide influence not just on the way we write code today, but on the tools and technologies we use to write it.

Structured Programming by Ole-Johan Dahl, Edsger W. Dijkstra, and C. A. R. Hoare is a seminal work in computer science published in 1972 that advocates for the use of structured programming techniques to improve software reliability and clarity. The book criticizes traditional programming practices for their complexity and error-proneness, and promotes modular design, a top-down approach, and the use of control structures like loops and conditionals instead of goto statements. It significantly influenced software development methodologies, leading to more maintainable, efficient, and understandable code, laying a foundation for modern programming practices.

Key Points

  • Critique of Traditional Programming: It critically assesses the traditional ad hoc programming practices that were common before its publication. These practices were often complex and led to error-prone code.
    1. Overuse of Goto Statements: Traditional programming often relied heavily on the goto statement, leading to what Dijkstra famously described as “spaghetti code” – complex and tangled code structures that were difficult to follow and maintain.
    2. Lack of Modularity: Older programming methods typically didn’t emphasize breaking down a program into distinct, reusable modules. This lack of modularity resulted in code that was hard to understand, debug, and modify.
    3. Poor Readability and Maintainability: Traditional code, with its complex flow and lack of structure, was often hard to read and understand. This complexity made maintenance a challenging and error-prone task.
    4. Ad-hoc Development Approach: Earlier programming practices did not always follow a systematic approach, leading to ad-hoc solutions that could be inefficient and unreliable.
    5. Error-Prone and Inefficient: Without structured methods, programs were more susceptible to errors, and debugging was more challenging. The lack of efficient structures often led to performance issues.
    6. Difficulties in Testing and Verification: The complex and intertwined nature of traditional programs made them difficult to test and verify thoroughly, increasing the likelihood of bugs and security vulnerabilities.
  • Modular Design Emphasis: The authors emphasize the importance of modular design in programming. This involves breaking down a program into smaller, manageable, and independently functioning modules, improving both understandability and maintainability.
  • Top-Down Approach: The book advocates for a top-down approach in software development, where complex problems are progressively broken down into simpler, more manageable sub-problems – a process referred to as “step-wise refinement” (see the sketch after this list).
  • Control Structures over Goto Statements: A key point is the preference for using control structures like loops and conditional statements instead of goto statements. This shift is crucial for improving the structure and readability of code.
  • Foundation for Modern Programming: The ideas and principles presented in the book laid the groundwork for many of the programming techniques and best practices used today, marking a significant shift in the way software is written and understood. Its impact, especially on how the design of programming languages is approached, has been profound.
    1. Object-Oriented Programming (OOP): While distinct from structured programming, OOP inherits its emphasis on modularity and organization. OOP structures programs around objects and their interactions, promoting code reusability and maintainability.
    2. Functional Programming: Emphasizing immutability and first-class functions, functional programming follows the principles of structured programming in creating clear and predictable code structures.
    3. Modular Programming: Directly influenced by the call for modularity in structured programming, this approach involves dividing a program into separate modules that can be developed, tested, and debugged independently.
    4. Refactoring: The practice of restructuring existing computer code without changing its external behavior. Structured programming’s emphasis on readability and maintainability laid the groundwork for the concept of refactoring.
    5. Test-Driven Development (TDD): This methodology, where tests are written before the code itself, aligns with the structured programming philosophy of clear, manageable code sections, facilitating easier testing.
    6. Agile Software Development: While Agile is a broader methodology, its emphasis on iterative development and adaptability echoes structured programming’s focus on manageable code chunks and responsiveness to change.
    7. Design Patterns: Many software design patterns that provide solutions to common software design problems are built on the principles of structured programming, emphasizing clarity, modularity, and maintainability.
    8. Integrated Development Environments (IDEs): The development of IDEs, which facilitate the programming process, reflects the structured programming emphasis on efficient and error-minimized coding environments.
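
To make “step-wise refinement” concrete, here’s a small, hypothetical Python example – not from the book – in which the top-level function reads like the problem statement and each helper fills in one level of detail. Each piece can be understood, tested, and changed independently, which is rather the point.

```python
# A hypothetical illustration of top-down step-wise refinement: the top-level
# function states the problem; each helper refines one step of the solution.

def report_word_frequencies(text: str, top_n: int = 3) -> list[tuple[str, int]]:
    words = split_into_words(text)
    counts = count_occurrences(words)
    return most_frequent(counts, top_n)

def split_into_words(text: str) -> list[str]:
    return text.lower().split()

def count_occurrences(words: list[str]) -> dict[str, int]:
    counts: dict[str, int] = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1
    return counts

def most_frequent(counts: dict[str, int], top_n: int) -> list[tuple[str, int]]:
    return sorted(counts.items(), key=lambda item: item[1], reverse=True)[:top_n]

print(report_word_frequencies("the cat sat on the mat the end"))
# [('the', 3), ('cat', 1), ('sat', 1)]
```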

Structured Programming significantly influenced the design and evolution of programming languages in several key ways:

  1. Introduction of Control Structures: One of the most direct impacts was the incorporation of control structures like loops (for, while) and conditionals (if, else) in programming languages. These structures replaced the chaotic and hard-to-follow goto statements, leading to more readable and maintainable code (see the before-and-after sketch below).
  2. Support for Modularity: The emphasis on modularity in structured programming led to language features that support the division of code into reusable modules or functions. This can be seen in the development of function-based languages and later in object-oriented languages where encapsulation and abstraction are key.
  3. Enhanced Syntax for Readability: Structured programming’s focus on readability influenced language syntax, leading to the design of more intuitive and human-readable languages. This is evident in the evolution from older languages like COBOL and FORTRAN to more modern languages like Python and Java.
  4. Type Systems and Error Checking: The structured approach to programming underscored the importance of reliability, which influenced the development of strong type systems and compile-time error checking in languages to catch errors early in the development cycle.
  5. Encouragement of Good Practices: The principles of structured programming fostered a mindset of writing clean, well-structured code, which influenced language design to encourage or enforce good programming practices, such as proper indentation and use of descriptive identifiers.
  6. Impact on Scripting Languages: Even scripting languages, designed for different purposes, incorporated principles of structured programming for better manageability of code, especially as scripts grew more complex.
  7. Paradigm Shift in Language Design Philosophy: Perhaps the most profound influence was a paradigm shift in how language designers approached the creation of new programming languages, prioritizing code readability, maintainability, and structure.

Through these influences, structured programming left an indelible mark on the landscape of programming languages, shaping the tools and methods used by generations of software developers.
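
Python never had a goto to take away, but a hedged before-and-after sketch can still show what the shift to control structures buys you – the same search written with goto-era flag-juggling, and again with the loop structure expressing the control flow directly:

```python
# Hypothetical before/after: the same search written with manual flag-juggling
# (the style structured programming argued against) and with structured
# control flow. Both are valid Python; one is much easier to follow.

NUMBERS = [7, 12, 5, 20, 3]

# Before: control state tracked by hand in flags and an index.
def first_over_unstructured(limit: int):
    i, found, result = 0, False, None
    while not found and i < len(NUMBERS):
        if NUMBERS[i] > limit:
            found, result = True, NUMBERS[i]
        i += 1
    return result

# After: the loop and early return express the control flow themselves.
def first_over_structured(limit: int):
    for n in NUMBERS:
        if n > limit:
            return n
    return None

print(first_over_unstructured(10), first_over_structured(10))  # 12 12
```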

It’s hard to believe, I know, but before books like Structured Programming were published, software development was a bit like the Wild West. I know, right? I can tell you’re shocked. How unlike the software development of today!

Programmers sort of made it up as they went along. The notion of a design and development methodology – starting with desired system behaviour and working methodically (hence, “methodology”) down to individual units of code that together produce that behaviour, organised to be easy to test and easy to change later – was a big news flash to the two hundred thousand people in the world then working as programmers.

We can find the DNA of Structured Programming in most modern approaches to software development: use cases, OOA/D (e.g., Ivar Jacobson’s highly influential Objectory methodology), CRC cards, TDD & BDD, Feature-Driven Development – the list goes on and on: approaches to design and development that start with a desired system outcome and end with modular, tested code that’s easy to understand and easy to change when it inevitably needs to.

Well, that would be nice, wouldn’t it?

The Bluffer’s Guide to Peopleware: Productive Projects and Teams

Our series of very highly distilled summaries of some of the most influential writings in software development – like books in rocket-fuel form – continues with Peopleware: Productive Projects and Teams, by Tom DeMarco and Tim Lister, first published in 1987.

Peopleware emphasizes the human side of software development. It argues that the key to successful projects lies in managing and understanding people, rather than just focusing on technical aspects. The book discusses how organizational culture, teamwork, and work environment significantly impact productivity and project success. It advocates for creating a conducive work environment, fostering effective communication, and respecting the individuality and creativity of team members. The authors stress that managing the human element is critical for successful project management in the software industry.

Key Points

  1. Importance of People Over Processes and Tools: The book emphasizes that the success of projects is more dependent on the people involved than on the processes or tools used. Human factors such as motivation, team dynamics, and talent are crucial.
  2. The Role of Management: It highlights the role of management in creating a supportive environment. Managers should focus on eliminating demotivating factors and creating a nurturing environment rather than just supervising and controlling.
  3. The Work Environment: DeMarco and Lister discuss how the physical work environment, like noise and space, significantly affects productivity. They advocate for private, quiet workspaces to enhance focus and efficiency.
  4. Team Dynamics: The book emphasizes the importance of building jelled, cohesive teams. Such teams are more efficient, creative, and better at problem-solving. It discusses how to foster a sense of camaraderie and trust among team members.
  5. Communication: Effective communication within teams is highlighted as a key to success. The authors suggest that open, honest communication and regular interaction are vital for project success.
  6. Respecting Individuality: Recognizing and respecting the individual differences and strengths of team members is crucial. The book suggests that teams should leverage these individual talents rather than forcing uniformity.
  7. Avoiding Overtime: The book warns against the culture of excessive overtime. It argues that long hours can lead to burnout and reduced productivity, emphasizing the importance of a balanced work-life.
  8. Focusing on Quality of Work: The authors argue that focusing on the quality of work rather than the quantity can lead to better outcomes and more satisfied employees.

These points collectively underline the idea that understanding and managing the human element is as important as, if not more important than, the technical aspects of software project management.

Peopleware’s influence, particularly on Agile Software Development, is undeniable. The core Agile value “Individuals and interactions over processes and tools” has Peopleware written all over it. Self-organising teams, where developers make the key decisions themselves about how the team works, are also very Peopleware. And principles like “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done” are pure Peopleware too, as is the emphasis on face-to-face (or, these days, webcam-to-webcam) communication, working at a sustainable pace, and continued attention to technical excellence and good design.

Also very influential is the idea of leaders as people who create environments where teams can do their best work, as opposed to the traditional command-and-control model of a leader as someone who tells teams what to do. Although many leaders pay lip service to this mantra, in practice the traditional model still dominates. (And this is where my mantra of “If they won’t give you control, take it!” comes from. It’s rarely given freely, and the best developers seem to have a distinct knack for managing upwards.)

Peopleware dovetails beautifully with another seminal work, The Mythical Man-Month, in that nurturing effective teams gets way harder as the teams get bigger, and nigh-on impossible when there’s high churn within teams. Both books hint strongly at teams being the real assets in software organisations; something I vigorously agree with.

Most influential of all is the common belief these days that behind every technical or process problem in software development, there’s almost always a people problem.

The Bluffer’s Guide to The Mythical Man-Month

One thing that often strikes me when sitting in on client interviews is how many candidates lack an historical perspective on software development. In a profession of “perpetual beginners”, where new developers are lucky if they’re exposed to old developers conversant with old ideas, it’s understandable that so many of us think of what are, in fact, old ideas as shiny and new. The relentless churn of reinvention is like the winds that shift the sands, burying the edifices of previous civilisations, and leaving today’s developers under the mistaken impression they built their cities first.

I found it very valuable to look back over the history of software development, and I was surprised at how many key insights and ideas date back decades earlier than I’d thought.

But that’s a lot of reading! So, for those of you who are too busy to do that legwork, I thought it might be useful to present summaries of some of those most influential books and papers in a more easily digestible form. Starting with one of the most seminal: The Mythical Man-Month, by Fred Brooks (published in 1975).

Brooks argues that adding manpower to a late software project makes it later, due to the increased complexity of communication. He emphasizes the importance of small, skilled teams, clear communication, and careful planning. Brooks introduces what became known as “Brooks’ Law” and stresses the importance of conceptual integrity in design. He also discusses the challenges of software estimation, the trade-offs between quality and time, and the value of iteration in software development.

Key Concepts:

Brooks’ Law:

“Adding manpower to a late software project makes it later.”

This counter-intuitive principle is based on several key observations and reasons:

  1. Ramp-Up Time: New team members require time to become productive. They need to learn about the project, its code base, the tools being used, and the team’s working style. This ramp-up time can significantly slow down the overall progress as existing team members spend time training the newcomers instead of working on the project.
  2. Communication Overhead: As more people are added to a project, the number of potential communication channels grows far faster than the headcount – n people have n(n-1)/2 channels (see the quick arithmetic after this list). Every new team member adds communication paths to everyone already on the team, making coordination and information sharing more complex and time-consuming.
  3. Division of Labor: There’s a limit to how effectively a task can be partitioned among multiple workers. Some tasks simply cannot be divided because of their sequential nature, and for those that can be divided, the division itself can introduce extra work, such as integration and testing of the different parts. You may have come across Brooks’ famous analogy of expecting 9 women to produce a baby in one month. (And before you say it, it didn’t escape my attention that in the world of The Mythical Man-Month, the men do the work while the women make the babies. Hey, it was the 1970s. You’ve seen Life On Mars, right?)
  4. Diminishing Returns: After a certain point, the productivity per worker starts to decrease as the team size increases, due to the factors mentioned above. This can lead to a scenario where adding more people actually results in less overall productivity.
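
The communication overhead is easy to put numbers on: n people have n(n-1)/2 potential communication channels, so channels multiply much faster than headcount. A quick sketch:

```python
# Communication channels grow quadratically with team size: n * (n - 1) / 2.
# Doubling a team of 5 roughly quadruples the number of channels to keep open.

def communication_channels(team_size: int) -> int:
    return team_size * (team_size - 1) // 2

for n in (3, 5, 10, 20):
    print(f"{n:2d} people -> {communication_channels(n):3d} channels")
# 3 -> 3, 5 -> 10, 10 -> 45, 20 -> 190
```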

Brooks’ Law highlights the importance of careful team and project management in software development. It suggests that throwing more resources at a problem, especially in a time-constrained situation, is not always the best solution and can often exacerbate the problem. Instead, Brooks advocates for better planning, clear communication, and setting realistic timelines to avoid the pitfalls of late projects.

Software Estimation

Brooks highlights the inherent difficulties in accurately estimating the time and resources needed for software projects. He points out that optimistic assumptions, failure to account for all necessary tasks (like integration and testing), and the unpredictable nature of creative work like programming often lead to underestimation. Brooks suggests more realistic approaches to estimation, taking into account the uncertainties and complexities involved.

Quality vs. Time

The book discusses the trade-off between the quality of the software and the time taken to develop it. Brooks argues that rushing to meet deadlines often leads to compromises in software quality. He emphasizes the importance of not sacrificing quality for speed, as poor quality can lead to more significant issues and delays later, like increased maintenance costs and system failures. He advocates for a balanced approach where sufficient time is allocated to ensure high-quality outcomes.

Value of Iteration

Brooks promotes the concept of iterative development, which was a significant shift from the prevailing models of his time. He argues that software should be developed in stages, with each stage building on the previous one. This approach allows for continuous testing, feedback, and refinement, leading to a more robust and well-designed final product. Iterative development helps in identifying and fixing issues early in the process, reducing the overall risk and improving the software’s quality.

Conceptual Integrity in Design

This principle emphasizes the importance of having a coherent and cohesive design approach, ensuring that all components of the software work well together and adhere to a central concept. Achieving conceptual integrity often involves a strong guiding hand, such as a chief architect, to ensure that all aspects of the project align with the overarching design philosophy. This approach leads to software that is easier to understand, use, and maintain, as it avoids the complexity and confusion of mixed or conflicting designs.

Hopefully you can see how The Mythical Man-Month has profoundly influenced modern software development. For sure, some of these ideas have evolved and attitudes have changed over the years. It’s debatable whether a book called “The Mythical Man-Month” would get published in 2023, for example. Also, TMM-M dates from a time when organisations were much more hierarchical and less collaborative.

We tend to recommend much, much faster iteration these days than perhaps Brooks and others of his time imagined. (TBF, our computers are about a million times faster, so the inner loop of build & test is much, much shorter these days.) And I may not agree with Brooks’ take on the need for something like a Chief Architect (there are other, arguably better, ways to keep a team singing from the same hymn sheet). Finally, in 2023, many of us now see estimation as solving the wrong problem, preferring instead to deliver working software more often to make real progress more transparent. When trains leave every 5 minutes, the timetable becomes less important.

But the key concepts are mostly the same. If you follow me on social media, you will have seen me espouse “small, skilled teams rapidly and sustainably evolving working software” and championing a focus on quality – continuous testing – as a means to achieving that.

It’s a book every developer and manager of developers should read. And now you can pretend that you have. It’ll be our little secret 😉