The Bluffer’s Guide to Principles of Software Engineering Management

If you’ve ever had the misfortune to be managed by me, you can thank one Tom Gilb for inspiring my habit of asking “What problem does this solve?” You may know from my writings, too, that I’m a very problem-oriented (or “outcome-oriented”, if you prefer) software developer. That, too, stems from my stumbling across Mr Gilb’s work early in my career (and Mr Gilb himself later on).

Principles of Software Engineering Management by Tom Gilb, published in 1988, is a seminal work in software engineering. It introduces the evolutionary development model, emphasizing iterative progress through small, manageable steps. Gilb advocates for quantifiable goals and metrics to guide software development, ensuring quality and efficiency. He underscores the importance of involving all stakeholders, including clients, in the development process for better outcomes. The book is known for its practical approach, blending management strategies with software engineering practices.

If that sounds at all familiar, you wouldn’t be the first person to ask if Gilb’s goal-seeking, evolutionary approach to software development was the forerunner to what we know today as Agile Software Development. And the answer is most definitely “yes”.

Key Points

  • Evolutionary Development: Advocates for an iterative and incremental approach to software development, emphasizing continuous improvement.
  • Quantifiable Goals and Metrics: Stresses the importance of setting clear, measurable objectives for software projects to ensure quality and effectiveness.
  • Stakeholder Involvement: Emphasizes the need for involving all stakeholders, including clients, in the development process to align outcomes with user needs and expectations.
  • Early and Continuous Delivery: Encourages frequent delivery of working software to get early feedback and facilitate better end results.
  • Adaptability and Flexibility: Promotes adaptability in the development process, allowing for changes and revisions based on feedback and changing requirements.
  • Risk Management: Highlights the importance of identifying and managing risks early in the software development life cycle.
  • Effective Communication: Underlines the necessity of clear, consistent communication among team members and with stakeholders to ensure project success.
  • Team Empowerment: Advocates for empowering development teams, giving them the autonomy to make decisions and solve problems creatively.
  • Quality Focus: Prioritizes high-quality outputs, with an emphasis on robust testing and validation throughout the development process.
  • Efficiency in Resource Management: Stresses efficient use of resources, including time and budget, to maximize productivity and value.

If you follow me on social media, you may have seen me argue that “productivity” in software development is essentially the value a team creates for each dollar spent, and that “value” is very much in the eye of the beholder. Could be increased revenue, could be decreased costs, could be Oscars won, could be lives saved. It’s not really up to us, and PoSEM was the book that opened my eyes to the need to work closely with stakeholders to build an understanding of what has value to them.

Reframing development as an iterative process of solving problems (as opposed to delivering solutions – a subtle but profound distinction) can have a big impact on teams. Gilb once told me about a client of his who was investing millions in a system to more effectively prioritise email communications so that the top brass saw all the important stuff without it being buried under a mountain of trivial memos and birthday announcements. The project was running far behind schedule and was way over budget. Tom asked that simple question: “What problem are you trying to solve?” It had evidently been a long time since anyone thought about that. It’s extremely common for teams to lose sight of the end goal and for the work to become an end in itself. As one senior manager in the UK Post Office once told me after I asked what the end goal of his project was, “The end goal is to stick to the plan”. Anyway, it turned out their existing email solution could be configured to do what they needed quite easily. Problem solved!

There have been many times in my career when taking a step back and asking “What problem are we trying to solve?” has precipitated dramatic changes of direction towards much more fruitful – and far quicker and cheaper – outcomes. It’s the most powerful question in software development.

Although sadly out of print, you can pick up second-hand copies of Principles of Software Engineering Management for very little. I highly recommend giving it a look if you’re curious about the historical context of Agile and Lean, and are interested in how to shift your focus from building products to solving problems. For all those folks who harp on about delivering “value”, this is one of very few books in our profession that talks about what that actually means.

The Bluffer’s Guide to A Spiral Model of Software Development and Enhancement

When we’re talking about iterative and incremental software development, it’s easy to forget that it wasn’t all that long ago – well, okay, if you’re my age it doesn’t seem all that long ago – when that wasn’t really a thing.

In the formative years of our profession, in the 1970s, there were three schools of thought that dominated the software design and development methodology landscape:

The “Waterfall” Model – a sequence of development activities (requirements, design, coding, testing, operations), with some basic level of iteration built in to allow us to revisit earlier activities (“do it twice”)

Structured Programming – often seen working with the Waterfall model in the 1970s (the Martin Scorsese and Robert De Niro of software methodology)

Rapid Prototyping – a highly iterative, customer-centric approach to design that used a series of low-fi prototypes to pin down the details before “building it properly”.

In his seminal 1986 paper, A Spiral Model of Software Development and Enhancement, Barry Boehm discusses the drawbacks and risks of these dominant approaches, and then proposes a new model of the software development life cycle that incorporates their strengths and addresses their weaknesses.

This model emphasizes iterative development, risk analysis, and prototyping. It’s designed to combine elements of both the waterfall model and prototyping approaches, ensuring a systematic, but flexible software development process. The Spiral Model’s key feature is its focus on early identification and reduction of project risks, and it allows for incremental refinement of the product through successive spirals or phases. This approach has had a significant influence on how software development is approached, especially in complex, high-risk projects.

Key Points

  1. Introduction of the Spiral Model: Presents a new approach to software development, combining elements from the traditional Waterfall Model and prototyping, aiming for a systematic yet adaptable process.
  2. Iterative Development: The model emphasizes iterative cycles of development, where each cycle includes stages of planning, risk analysis, engineering, and evaluation.
  3. Risk Analysis: Central to the model, it focuses on the early identification and continuous management of risks throughout the project’s lifecycle.
  4. Prototyping: Utilizes prototyping extensively for refining requirements and design, allowing for user feedback and system adaptation.
  5. Phased Approach: Breaks down development into smaller segments, each encompassing objective setting, risk assessment, development, and validation.
  6. User Involvement: Stresses high levels of customer involvement to ensure that the evolving system aligns with user needs.
  7. Continuous Testing: In the Spiral Model, testing is an integral and continuous activity that occurs throughout the development process.
  8. Flexibility and Adaptability: Highlights the model’s adaptability to changes and varying risk profiles, making it suitable for large, complex, and high-risk projects.
  9. Comparison with Other Models: Discusses advantages over traditional models like Waterfall, particularly in addressing uncertainty and risk.
  10. Implementation Guidelines: Provides practical guidance for implementing the Spiral Model, emphasizing expert judgment in risk evaluation.
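The repeating cycle at the heart of the model – plan, analyse risk, build (prototyping first where risk is high), evaluate – can be sketched as a toy loop. This is a loose illustration only, not code from Boehm’s paper; all names, thresholds, and numbers are invented:

```python
# Toy sketch of the spiral's repeating cycle: plan, analyse risk,
# prototype when risk is high (to "buy information"), engineer, evaluate.
# Illustrative only -- names and numbers are not from Boehm's paper.

def spiral(initial_risk, risk_threshold=0.3, max_cycles=5):
    """Return a log of (cycle, activity) tuples across spiral cycles."""
    risk = initial_risk
    log = []
    for cycle in range(1, max_cycles + 1):
        log.append((cycle, "plan"))
        log.append((cycle, "risk analysis"))
        if risk > risk_threshold:
            log.append((cycle, "prototype"))  # reduce uncertainty first
            risk *= 0.5                       # prototyping halves the risk here
        log.append((cycle, "engineer"))
        log.append((cycle, "evaluate"))
        if risk <= risk_threshold:
            break                             # low enough risk to commit
    return log

activities = spiral(initial_risk=0.8)
```

With a high starting risk, the sketch spends its early cycles prototyping before it ever commits to full engineering – which is precisely the behaviour that distinguishes the Spiral Model from a single long Waterfall pass.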

The influence of Boehm’s paper can be seen in pretty much all of the software development methodologies that followed. Anybody who proposes not approaching development in an iterative and incremental manner these days will be asked to hand in their methodologist badge and gun. Software development since 1986 has mostly been variations on the Spiral Model theme. Some approaches may take it to eXtremes, of course. But that’s another Bluffer’s Guide for another day.

The Bluffer’s Guide to Managing The Development of Large Software Systems

In today’s super-compact summary of a seminal work in software development, we’re going to open the can of worms that is Winston Royce’s 1970 paper Managing The Development of Large Software Systems.

Royce’s paper is often credited with introducing the Waterfall model of software development. Royce outlines a sequential design process with distinct phases such as requirements, design, implementation, verification, and maintenance. He emphasizes the importance of thorough documentation, rigorous step-by-step development, and early stage planning.

However, Royce also identifies the obvious shortcomings of this model, particularly its inflexibility and the difficulty of accommodating changes after the process begins. The paper has had a significant impact on software development methodologies, shaping how large-scale projects are approached.

It is often misinterpreted as an endorsement of the Waterfall model. In reality, Royce did not advocate for the Waterfall model in its strict, linear form. While he presented what later became known as the Waterfall model (he never named it as such in his paper, but others used “Waterfall” to describe this linear model later – it’s kind of like “flying saucers” in that respect), he actually highlighted its limitations and risks, particularly the challenge of accommodating changes once the project is underway.

Royce’s key argument was for the incorporation of iterative elements, such as feedback loops and overlapping development phases, to address these shortcomings. He emphasized the need for more flexibility and adaptability in the software development process. Thus, his paper, often cited as the origin of the Waterfall model, was more a critique and a call for a more iterative approach than an endorsement of the rigid, linear process that the Waterfall model is known for.

In the decades that followed, many managers continued to misinterpret the intention of the paper, forcing teams to attempt the impossible – i.e., get it right first time. And in this sense, Royce’s paper has unintentionally caused considerable misery for countless thousands of development teams. Those managers, of course, probably never actually read the paper. Hence my parallel with “flying saucers” (the private pilot who in 1947 reported sighting unidentified aircraft moving at supersonic speeds never said they looked like saucers – but after news outlets reported them as “flying saucers”, suddenly everybody was reporting saucer-shaped craft).

There are still many, many teams out there looking for “flying saucers” when in reality no such thing likely exists. In practice, we never get it right first time – we always have to iterate. The main difference is that teams who claim to be following a Waterfall process have a first iteration that’s reeeeaaally long, and then in the run-up to a release (the “testing phase”), they start iterating rapidly to “get the ball in the hole”. In reality, there’s no such thing as “Waterfall” software development.

Key Points

  1. Sequential Phases: Royce described a sequential approach to software development with distinct phases: requirements specification, design, coding, testing, and operations.
  2. Documentation Importance: He emphasized the importance of documentation at each phase for effective communication and coordination.
  3. Early Stage Planning: Detailed planning in the early stages of the project was highlighted as crucial for success.
  4. Testing and Debugging: The paper stressed the importance of thorough testing and debugging in the later stages of development.
  5. Client Involvement: Royce advocated for involving the client at early stages and maintaining constant communication.
  6. Model Limitations: He identified limitations of the sequential model, particularly its inflexibility and the difficulty in accommodating changes.
  7. Iterative Elements: Royce recommended iterative elements, like overlapping development phases and prototyping, to address these limitations.
  8. Risk of Sequential Approach: The paper pointed out the risks associated with a purely sequential approach, especially for large, complex projects.
  9. Critique of Linear Models: Royce’s critique of linear, non-iterative models underlined the need for more adaptable and flexible methodologies in software development.

While Royce does indeed advocate for a more iterative approach to development, it’s doubtful he ever envisaged the level of iteration that’s been favoured since (he talks in the paper about “doing it twice” – yeah, and then do it some more!). And he still places great emphasis on the importance of planning and documentation: two activities we now know have limited value compared to feedback and active collaboration.

But for those who actually read the paper, there’s no denying it would have been a light bulb moment back then. Its influence in later thinking about the software development life cycle (e.g., Barry Boehm’s Spiral Model, proposed in 1986) is apparent. At the very least, “Managing The Development of Large Software Systems” started a conversation that’s still going on today.

The Bluffer’s Guide to The Design of Everyday Things

I continue my series of hyper-distilled summaries of seminal works in the field of software development with a book that doesn’t actually come from the field of software development, but that’s had a huge impact on the way we design user interfaces.

The Design of Everyday Things by Donald A. Norman, published in 1988, focuses on the design principles of everyday objects, emphasizing user-friendly and intuitive interfaces. Norman introduces concepts like affordances, signifiers, and feedback, which have become fundamental in software design. His insistence on user-centered design reshaped how designers approach software interfaces, prioritizing ease of use and user experience. The book’s influence is evident in the emphasis on usability in modern software development, leading to more intuitive and accessible digital products. Norman’s work highlights the importance of understanding the user’s perspective in the design process, making it a cornerstone in the field of human-computer interaction.

Key Points

  • Affordances: Objects should indicate how they can be used. An affordance is a quality of an object that suggests its function, like a handle on a door.
  • Signifiers: These are signals or symbols that indicate what action to take, like push or pull labels on doors.
  • Mapping: The relationship between controls and their effects should be clear and logical. For example, a switch designed for a light should be positioned in a way that intuitively suggests which light it controls.
  • Feedback: Users should receive immediate and clear feedback from their actions. For instance, when a button is pressed, there should be an indication that it has been activated.
  • Constraints: Design should limit the actions that can be performed, preventing error. For example, a USB plug that only fits one way prevents incorrect insertion.
  • Error Tolerance and Recovery: Designs should anticipate possible errors and allow for easy recovery from them, minimizing the cost of mistakes.
  • User-Centered Design: Focus on the needs, abilities, and limitations of the user. Design should not force users to adapt to the system; rather, systems should be built to suit users.
  • Conceptual Models: Users should be able to form a good mental model of how a system works to use it effectively.

Norman’s book revolutionized how designers think about user interaction, emphasizing the importance of understanding the user’s perspective and designing intuitive, user-friendly interfaces. These principles, while illustrated through everyday objects, have been widely applied in software design, enhancing usability and user experience in digital products.

  1. Affordances
    • Example: In a drawing app, a pencil icon represents a tool for drawing lines. The icon’s appearance suggests its function, making it intuitive for users to understand and use.
  2. Signifiers
    • Example: In web forms, asterisks (*) next to certain fields signify that they are required. This helps guide the user in completing the form correctly.
  3. Mapping
    • Example: In a music player application, the volume slider moves from left (low volume) to right (high volume), mapping spatially to the concept of increasing and decreasing volume.
  4. Feedback
    • Example: When a user submits a form on a website, a message appears confirming the submission or indicating errors to be corrected. This feedback helps users understand the result of their action.
  5. Constraints
    • Example: In an online payment form, the credit card number field only allows the entry of numbers and automatically formats them into groups of four, preventing typing errors.
  6. Error Tolerance and Recovery
    • Example: An email client auto-saves drafts, so if the application crashes or the user accidentally closes it, the written email isn’t lost and can be easily recovered.
  7. User-Centered Design
    • Example: A ride-sharing app uses large buttons, clear labels, and a simple interface to accommodate users who might not be tech-savvy, ensuring the app is accessible to a wide range of users.
  8. Conceptual Models
    • Example: A file management system uses a ‘folder’ metaphor, where files can be ‘placed’ into folders. This model is easy for users to understand as it mimics the physical filing systems familiar to them.

Each of these principles helps to create more intuitive, user-friendly software interfaces, improving the overall user experience and making technology more accessible and effective.
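The “constraints” principle in particular translates directly into code: rather than letting users type anything and scolding them afterwards, the input itself can be designed so a whole class of errors simply can’t happen. A minimal sketch, using a hypothetical helper (not from Norman’s book):

```python
# Sketch of Norman's "constraints" principle applied to a form field:
# the field silently discards anything that isn't a digit and formats
# the rest into groups of four, so mistyped separators can't cause errors.
# format_card_number is a hypothetical helper for illustration.

def format_card_number(raw: str) -> str:
    digits = "".join(ch for ch in raw if ch.isdigit())[:16]  # constrain input
    groups = [digits[i:i + 4] for i in range(0, len(digits), 4)]
    return " ".join(groups)

print(format_card_number("4111-1111 1111x1111"))  # → "4111 1111 1111 1111"
```

The same thinking applies to date pickers instead of free-text dates, or sliders instead of numeric fields: constrain the action space, and error recovery becomes largely unnecessary.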

If you’re familiar with the work of usability expert Jakob Nielsen, you may have noticed a similarity between Norman’s principles of user-centred design and Nielsen’s user interface design heuristics. Indeed, Nielsen and Norman formed a consulting company together – the Nielsen Norman Group – to offer training and guidance on user experience design. It’s a small world!

It’s easy to forget just how little attention was paid to the design of user interfaces until GUIs became popular in the 1980s. For many, user experience happened at a command line or on a green screen.

And it’s frustrating to see how quickly we forgot much of what we’d learned about human-computer interaction (HCI) design in the GUI age when the dotcom era exploded, and user experience was reinvented as an interactive document design discipline. Today, it’s not at all uncommon to have to deal with sucky user experiences that look amazing.

These principles are as relevant today as they were in 1988 – and not just to software – and arguably should be considered part of a developer’s arsenal of software design skills. How many user experiences could be dramatically improved? Yeah, I’m looking at you, Open Source Software!

The Bluffer’s Guide to Structured Programming

My series of hyper-distilled summaries of some of the most influential books and papers about software development continues with a work that has had a wide influence not just on the way we write code today, but on the tools and technologies we use to write it.

Structured Programming by Ole-Johan Dahl, Edsger W. Dijkstra, and C. A. R. Hoare is a seminal work in computer science published in 1972 that advocates for the use of structured programming techniques to improve software reliability and clarity. The book criticizes traditional programming practices for their complexity and error-proneness, and promotes modular design, top-down approach, and the use of control structures like loops and conditionals instead of goto statements. It significantly influenced software development methodologies, leading to more maintainable, efficient, and understandable code, laying a foundation for modern programming practices.

Key Points

  • Critique of Traditional Programming: It critically assesses the traditional ad hoc programming practices that were common before its publication. These practices were often complex and led to error-prone code.
    1. Overuse of Goto Statements: Traditional programming often relied heavily on the goto statement, leading to what Dijkstra famously described as “spaghetti code” – complex and tangled code structures that were difficult to follow and maintain.
    2. Lack of Modularity: Older programming methods typically didn’t emphasize breaking down a program into distinct, reusable modules. This lack of modularity resulted in code that was hard to understand, debug, and modify.
    3. Poor Readability and Maintainability: Traditional code, with its complex flow and lack of structure, was often hard to read and understand. This complexity made maintenance a challenging and error-prone task.
    4. Ad-hoc Development Approach: Earlier programming practices did not always follow a systematic approach, leading to ad-hoc solutions that could be inefficient and unreliable.
    5. Error-Prone and Inefficient: Without structured methods, programs were more susceptible to errors, and debugging was more challenging. The lack of efficient structures often led to performance issues.
    6. Difficulties in Testing and Verification: The complex and intertwined nature of traditional programs made them difficult to test and verify thoroughly, increasing the likelihood of bugs and security vulnerabilities.
  • Modular Design Emphasis: The authors emphasize the importance of modular design in programming. This involves breaking down a program into smaller, manageable, and independently functioning modules, improving both understandability and maintainability.
  • Top-Down Approach: The book advocates for a top-down approach in software development, where complex problems are progressively broken down into simpler, more manageable sub-problems – a process referred to as “step-wise refinement”.
  • Control Structures over Goto Statements: A key point is the preference for using control structures like loops and conditional statements instead of goto statements. This shift is crucial for improving the structure and readability of code.
  • Foundation for Modern Programming: The ideas and principles presented in the book laid the groundwork for many of the programming techniques and best practices used today, marking a significant shift in the way software is written and understood. Its impact, especially on how programming language design is approached, has been profound.
    1. Object-Oriented Programming (OOP): While distinct from structured programming, OOP inherits its emphasis on modularity and organization. OOP structures programs around objects and their interactions, promoting code reusability and maintainability.
    2. Functional Programming: Emphasizing immutability and first-class functions, functional programming follows the principles of structured programming in creating clear and predictable code structures.
    3. Modular Programming: Directly influenced by the call for modularity in structured programming, this approach involves dividing a program into separate modules that can be developed, tested, and debugged independently.
    4. Refactoring: The practice of restructuring existing computer code without changing its external behavior. Structured programming’s emphasis on readability and maintainability laid the groundwork for the concept of refactoring.
    5. Test-Driven Development (TDD): This methodology, where tests are written before the code itself, aligns with the structured programming philosophy of clear, manageable code sections, facilitating easier testing.
    6. Agile Software Development: While Agile is a broader methodology, its emphasis on iterative development and adaptability echoes structured programming’s focus on manageable code chunks and responsiveness to change.
    7. Design Patterns: Many software design patterns that provide solutions to common software design problems are built on the principles of structured programming, emphasizing clarity, modularity, and maintainability.
    8. Integrated Development Environments (IDEs): The development of IDEs, which facilitate the programming process, reflects the structured programming emphasis on efficient and error-minimized coding environments.

Structured Programming significantly influenced the design and evolution of programming languages in several key ways:

  1. Introduction of Control Structures: One of the most direct impacts was the incorporation of control structures like loops (for, while) and conditionals (if, else) in programming languages. These structures replaced the chaotic and hard-to-follow goto statements, leading to more readable and maintainable code.
  2. Support for Modularity: The emphasis on modularity in structured programming led to language features that support the division of code into reusable modules or functions. This can be seen in the development of function-based languages and later in object-oriented languages where encapsulation and abstraction are key.
  3. Enhanced Syntax for Readability: Structured programming’s focus on readability influenced language syntax, leading to the design of more intuitive and human-readable languages. This is evident in the evolution from older languages like COBOL and FORTRAN to more modern languages like Python and Java.
  4. Type Systems and Error Checking: The structured approach to programming underscored the importance of reliability, which influenced the development of strong type systems and compile-time error checking in languages to catch errors early in the development cycle.
  5. Encouragement of Good Practices: The principles of structured programming fostered a mindset of writing clean, well-structured code, which influenced language design to encourage or enforce good programming practices, such as proper indentation and use of descriptive identifiers.
  6. Impact on Scripting Languages: Even scripting languages, designed for different purposes, incorporated principles of structured programming for better manageability of code, especially as scripts grew more complex.
  7. Paradigm Shift in Language Design Philosophy: Perhaps the most profound influence was a paradigm shift in how language designers approached the creation of new programming languages, prioritizing code readability, maintainability, and structure.

Through these influences, structured programming left an indelible mark on the landscape of programming languages, shaping the tools and methods used by generations of software developers.
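To make the shift concrete, here’s a small sketch of what “step-wise refinement” and structured control flow look like in practice: a top-level goal decomposed into small, single-purpose functions using loops and conditionals, instead of one monolithic routine threaded together with jumps and flags. The problem (summing the valid readings in a list of raw input lines) is invented purely for illustration:

```python
# Step-wise refinement: the goal ("sum the valid readings") is refined
# into small functions with structured control flow -- iteration and
# selection -- rather than goto-driven spaghetti. Illustrative example only.

def parse_reading(line: str):
    """One refined sub-step: turn a raw line into a number, or None."""
    try:
        return float(line.strip())
    except ValueError:
        return None

def sum_valid_readings(lines):
    """Top-level step: a loop and a conditional instead of jumps."""
    total = 0.0
    for line in lines:             # structured iteration
        value = parse_reading(line)
        if value is not None:      # structured selection
            total += value
    return total

print(sum_valid_readings(["1.5", "oops", "2.5"]))  # → 4.0
```

Each function has a single entry and a single exit, and each can be read, tested, and changed in isolation – exactly the properties the book argues goto-heavy code lacks.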

It’s hard to believe, I know, but before books like Structured Programming were published, software development was a bit like the Wild West. I know, right? I can tell you’re shocked. How unlike the software development of today!

Programmers sort of made it up as they went along. The notion of a design and development methodology – starting with desired system behaviour and working methodically (hence, “methodology”) down to individual units of code that work together to produce that behaviour, with an emphasis on organising the code so it’s easy to test and easy to change later – was a big news flash to the two hundred thousand or so people in the world working as programmers.

We can find the DNA of Structured Programming in most modern approaches to software development: use cases, OOA/D (e.g., Ivar Jacobson’s highly influential Objectory methodology), CRC cards, TDD & BDD, Feature-Driven Development – the list goes on and on of approaches to design and development that start with a desired system outcome and end with modular, tested code that’s easy to understand and easy to change when it inevitably needs to.

Well, that would be nice, wouldn’t it?

The Bluffer’s Guide to Peopleware: Productive Projects and Teams

Our series of very highly distilled summaries of some of the most influential writings in software development – like books in rocket fuel form – continues with Peopleware: Productive Projects and Teams, by Tom DeMarco and Tim Lister, first published in 1987.

Peopleware emphasizes the human side of software development. It argues that the key to successful projects lies in managing and understanding people, rather than just focusing on technical aspects. The book discusses how organizational culture, teamwork, and work environment significantly impact productivity and project success. It advocates for creating a conducive work environment, fostering effective communication, and respecting the individuality and creativity of team members. The authors stress that managing the human element is critical for successful project management in the software industry.

Key Points

  1. Importance of People Over Processes and Tools: The book emphasizes that the success of projects is more dependent on the people involved than on the processes or tools used. Human factors such as motivation, team dynamics, and talent are crucial.
  2. The Role of Management: It highlights the role of management in creating a supportive environment. Managers should focus on eliminating demotivating factors and creating a nurturing environment rather than just supervising and controlling.
  3. The Work Environment: DeMarco and Lister discuss how the physical work environment, like noise and space, significantly affects productivity. They advocate for private, quiet workspaces to enhance focus and efficiency.
  4. Team Dynamics: The book emphasizes the importance of building jelled, cohesive teams. Such teams are more efficient, creative, and better at problem-solving. It discusses how to foster a sense of camaraderie and trust among team members.
  5. Communication: Effective communication within teams is highlighted as a key to success. The authors suggest that open, honest communication and regular interaction are vital for project success.
  6. Respecting Individuality: Recognizing and respecting the individual differences and strengths of team members is crucial. The book suggests that teams should leverage these individual talents rather than forcing uniformity.
  7. Avoiding Overtime: The book warns against the culture of excessive overtime. It argues that long hours can lead to burnout and reduced productivity, emphasizing the importance of a balanced work-life.
  8. Focusing on Quality of Work: The authors argue that focusing on the quality of work rather than the quantity can lead to better outcomes and more satisfied employees.

These points collectively underline the idea that understanding and managing the human element is as important, if not more, than the technical aspects in software project management.

Peopleware’s influence, particularly on Agile Software Development, is undeniable. The core Agile value “Individuals and interactions over processes and tools” has Peopleware written all over it. Self-organising teams, where developers make the key decisions themselves about how the team works, are also very Peopleware. So are principles like “Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done”, together with the emphasis on face-to-face (or, these days, webcam-to-webcam) communication, working at a sustainable pace, and continuous attention to technical excellence and good design.

Also very influential is the idea of leaders as people who create environments where teams can do their best work, as opposed to the traditional command-and-control model of a leader as someone who tells teams what to do. Although many leaders pay lip service to this idea, in practice the traditional model still dominates. (And this is where my mantra of “If they won’t give you control, take it!” comes from. It’s rarely given freely, and the best developers seem to have a distinct knack for managing upwards.)

Peopleware dovetails beautifully with another seminal work, The Mythical Man-Month, in that nurturing effective teams gets way harder as the teams get bigger, and nigh-on impossible when there’s high churn within teams. Both books hint strongly at teams being the real assets in software organisations; something I vigorously agree with.

Most influential of all is the common belief these days that behind every technical or process problem in software development, there’s almost always a people problem.

The Bluffer’s Guide to The Mythical Man-Month

One thing that often strikes me when sitting in on client interviews is how many candidates lack an historical perspective on software development. In a profession of “perpetual beginners”, where new developers are lucky if they’re exposed to old developers conversant with old ideas, it’s understandable that so many of us think of what are, in fact, old ideas as shiny and new. The relentless churn of reinvention is like the winds that shift the sands, burying the edifices of previous civilisations, and leaving today’s developers under the mistaken impression they built their cities first.

I found it very valuable to look back over the history of software development, surprising myself at how so many key insights and ideas date back decades before I thought they did.

But that’s a lot of reading! So, for those of you who are too busy to do that legwork, I thought it might be useful to present summaries of some of those most influential books and papers in a more easily digestible form. Starting with one of the most seminal: The Mythical Man-Month, by Fred Brooks (published in 1975).

Brooks argues that adding manpower to a late software project makes it later, due to the increased complexity of communication. He emphasizes the importance of having a small, skilled team, clear communication, and the need for planning. Brooks introduces “Brooks’ Law” and stresses the importance of conceptual integrity in design. He also discusses the challenges of software estimation, the trade-offs between quality and time, and the value of iteration in software development.

Key Concepts:

Brooks’ Law:

“Adding manpower to a late software project makes it later.”

This counter-intuitive principle is based on several key observations and reasons:

  1. Ramp-Up Time: New team members require time to become productive. They need to learn about the project, its code base, the tools being used, and the team’s working style. This ramp-up time can significantly slow down the overall progress as existing team members spend time training the newcomers instead of working on the project.
  2. Communication Overhead: As more people are added to a project, the complexity of communication grows combinatorially: a team of n people has n(n−1)/2 potential communication channels. Every new team member adds more channels, making coordination and information sharing more complex and time-consuming.
  3. Division of Labor: There’s a limit to how effectively a task can be partitioned among multiple workers. Some tasks simply cannot be divided because of their sequential nature, and for those that can be divided, the division itself can introduce extra work, such as integration and testing of the different parts. You may have come across Brooks’ famous analogy of expecting 9 women to produce a baby in one month. (And before you say it, it didn’t escape my attention that in the world of The Mythical Man-Month, the men do the work while the women make the babies. Hey, it was the 1970s. You’ve seen Life On Mars, right?)
  4. Diminishing Returns: After a certain point, the productivity per worker starts to decrease as the team size increases, due to the factors mentioned above. This can lead to a scenario where adding more people actually results in less overall productivity.
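The communication-overhead point is easy to verify with a little arithmetic. A quick sketch (mine, not from the book) counting pairwise communication channels for growing team sizes:

```python
def communication_channels(team_size: int) -> int:
    """Number of potential pairwise communication channels in a team of n people:
    n * (n - 1) / 2, i.e. every distinct pair is one channel."""
    return team_size * (team_size - 1) // 2

# Doubling the team far more than doubles the channels:
for n in (2, 5, 10, 20):
    print(f"{n:>2} people -> {communication_channels(n):>3} channels")
```

Going from 5 people (10 channels) to 10 people (45 channels) more than quadruples the coordination surface, which is exactly why the new hires don’t simply add their output to the total.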

Brooks’ Law highlights the importance of careful team and project management in software development. It suggests that throwing more resources at a problem, especially in a time-constrained situation, is not always the best solution and can often exacerbate the problem. Instead, Brooks advocates for better planning, clear communication, and setting realistic timelines to avoid the pitfalls of late projects.

Software Estimation

Brooks highlights the inherent difficulties in accurately estimating the time and resources needed for software projects. He points out that optimistic assumptions, failure to account for all necessary tasks (like integration and testing), and the unpredictable nature of creative work like programming often lead to underestimation. Brooks suggests more realistic approaches to estimation, taking into account the uncertainties and complexities involved.

Quality vs. Time

The book discusses the trade-off between the quality of the software and the time taken to develop it. Brooks argues that rushing to meet deadlines often leads to compromises in software quality. He emphasizes the importance of not sacrificing quality for speed, as poor quality can lead to more significant issues and delays later, like increased maintenance costs and system failures. He advocates for a balanced approach where sufficient time is allocated to ensure high-quality outcomes.

Value of Iteration

Brooks promotes the concept of iterative development, which was a significant shift from the prevailing models of his time. He argues that software should be developed in stages, with each stage building on the previous one. This approach allows for continuous testing, feedback, and refinement, leading to a more robust and well-designed final product. Iterative development helps in identifying and fixing issues early in the process, reducing the overall risk and improving the software’s quality.

Conceptual Integrity in Design

This principle emphasizes the importance of having a coherent and cohesive design approach, ensuring that all components of the software work well together and adhere to a central concept. Achieving conceptual integrity often involves a strong guiding hand, such as a chief architect, to ensure that all aspects of the project align with the overarching design philosophy. This approach leads to software that is easier to understand, use, and maintain, as it avoids the complexity and confusion of mixed or conflicting designs.

Hopefully you can see how The Mythical Man-Month has profoundly influenced modern software development. For sure, some of these ideas have evolved and attitudes have changed over the years. It’s debatable whether a book called “The Mythical Man-Month” would get published in 2023, for example. Also, TMM-M dates from a time when organisations were much more hierarchical and less collaborative.

We tend to recommend much, much faster iteration these days than perhaps Brooks and others of his time imagined. (TBF, our computers are about a million times faster, so the inner loop of build & test is much, much shorter these days.) And I may not agree with Brooks’ take on the need for something like a Chief Architect (there are other, arguably better, ways to keep a team singing from the same hymn sheet). Finally, in 2023, many of us now see estimation as solving the wrong problem, preferring instead to deliver working software more often to make real progress more transparent. When trains leave every 5 minutes, the timetable becomes less important.

But the key concepts are mostly the same. If you follow me on social media, you will have seen me espouse “small, skilled teams rapidly and sustainably evolving working software” and championing a focus on quality – continuous testing – as a means to achieving that.

It’s a book every developer and manager of developers should read. And now you can pretend that you have. It’ll be our little secret 😉

The Illusion Of Developer “Productivity” Opens The Door To Snake Oil

A footballer’s productivity is ultimately measured by how many goals the team scores

There’s been much talk about measuring the productivity of software developers, triggered by a report from management consultants McKinsey claiming to have succeeded where countless others over many decades have failed.

I’m not going to dwell on the contents, as I prefer not to flatter it with that kind of scrutiny. Suffice to say, their ideas are naive at best. File in the usual place with your used egg shells and empty milk cartons.

What McKinsey’s take suffers from is very, very common: they’ve mistaken activity for outcomes. Activity is easy to measure at an individual level, outcomes not so much. In fact, outcomes are often difficult to quantify at all.

My first observation is that it’s not individual developers who produce outcomes; outcomes are achieved by the team. Just as there are players in a football team who usually don’t score goals, but without them fewer goals would be scored, there are usually people in a dev team who would be viewed as “unproductive” by McKinsey’s yardstick, but without whom the team as a whole would achieve much less. (See Dan North’s brilliant skewering of their metrics using the very highly valuable Tim McKinnon – and I know, because I’ve worked with him – as the example.)

My second observation is that outcomes in software development are rarely what they seem. Is our goal really to deliver code? Or is it to solve customers’ problems? Think of a doctor; is their goal to deliver treatments, or is it to make us better?

We in software, sadly, tend to be in the treatments business, not in the patients business. We’re Big Pharma. And in the same way that Big Pharma invests massively in persuading us that we have the illness their potion cures, we have a tendency to try to get the customer’s problem to fit our solution. And so it is that “productivity” tends to be about the potion, and not the patient.

And so I wholeheartedly reject this individualist, mechanistic approach to measuring developer productivity. It’s nonsense. But I can understand why the idea appeals to managers in particular. The Illusion of Control™ has a strong pull in a situation where, in reality, managers have no real control beyond what to fund and what not to fund, and who to hire and who to fire. Who wouldn’t want those decisions to appear empirical and rational, and not the gambles they actually are?

But more important to me is how this illusion can impact the very real business of solving customers’ problems with software. When all our focus is on potions and not patients, it’s easy for Snake Oil to creep into the process.

At time of writing, there’s much talk and incredible hype about one particular snake oil that promises much but, as far as I’ve managed to see with concrete examples that can be verified, delivers little to nothing for patients: Large Language Models.

Code generation using LLMs like ChatGPT is, like all generative A.I., impressive but wrong. Having spent more than one hundred hours experimenting with GPT-4 and trying to replicate some of the claims people are making, I’ve seen how the illusion of productivity can suck us in. Yes, you are creating code faster. No, that code doesn’t work a lot of the time.

But if we measure our productivity by “how far we kick the ball” instead of “how many goals the team scores”, that can seem like a Win. It falls into the same trap as thinking that skimping on developer testing – or skipping it altogether – helps us deliver sooner. Deliver what, exactly? Bugs?

On their website, GitHub claim that 88% of developers using Copilot feel more productive. But what percentage of developers also feel that skipping some developer testing helps them deliver working software sooner? I could take a wild guess at somewhere in the ballpark of 88%, perhaps.

They did a study, of course. (There’s always a study!) They tasked developers with writing a web server in JavaScript from scratch, some using Copilot, some doing it all by hand. And lo and behold, the developers who used Copilot completed that task in 55% less time. Isn’t it marvellous how vendor-funded studies always seem to back up their claims?

But let’s look a little closer, shall we? First of all, since when were customer requirements like “Write me a web server”? A typical software system used in business, for example, will have complex rules that are usually not precisely defined up front, but rather discovered through customer feedback. And in that sense, how quickly we converge on a working solution will depend heavily on iterating, and on our ability to evolve the code. This wasn’t part of their exercise.

Also, ahem… if there was any problem that Copilot was probably already trained on, it’s JavaScript web servers. People have noted how good GPT-4 is at solving online coding problems that were published before the training data cut-off date, but not so hot at solving problems published after that. I’d like to see how it performs on a novel problem. (In my own experiments with it, poorly.)

And two more observations:

First, this study focuses on developers working alone for a relatively short amount of time. Let’s see how it performs when a team is working on a more complex problem – each working on their own part of the system – for several days. That’s a lot of rapidly-changing context for an LLM. It’s easy to fool ourselves into believing something makes us better at running marathons because it helped us run the 100m dash faster.

Secondly, GitHub’s musings on measuring developer productivity suffer a very similar “potions over patients” bias to the McKinsey report.

And vendors have a very real incentive to want us to believe that the big problems in software development can be solved with their tools.

Given the very high stakes for our industry – probably visible from space by now – I think it would be useful to see bigger, wider and more realistic studies of the impact of tools like Copilot on the capability of teams to solve real customer problems. As with almost every super-duper-we’ll-never-be-poor-or-hungry-again CASE tool revolution that’s come before, I suspect the answer will be “none at all”. But you can’t charge $19 a month for “none at all”. (Well, okay, you can. Just as long as there are enough people out there who focus on potions instead of patients.)

But here’s the thing: I suspect bigger, wider, longer, more realistic studies of the impact on development team productivity might reveal simply that we still don’t know what that means.

GitHub Copilot – Productivity Boon or Considered Harmful?

We need to talk about GitHub Copilot. This is the ML-driven programming tool – powered by OpenAI Codex – that Microsoft is promoting as “Your AI pair programmer”, and which they claim “works alongside you directly in your editor, suggesting whole lines or entire functions for you.”

Now, full disclosure: I’ve not been able to try the Copilot Beta yet – there’s a waiting list – so my thoughts are based purely on what I’ve read about it, and what I’ve seen of it in demonstration videos by people who have tried it.

At first glance, Copilot looks very impressive. You can, for example, just declare a descriptive function or method name, and it will suggest a matching implementation. Or you can write a comment about what you want the code to do, and it will generate it for you.

All the examples I’ve seen were for well-defined, self-contained problems – “calculate a square root”, “find the lowest number” and so on. I’ve yet to see it handle more complex problems like “send an SMS message to this number when a product is running low on stock”.
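For context, the demos tend to involve problems like this one, sketched here in Python by way of illustration (my own code, not Copilot output) – a complete, testable function whose entire specification fits in its name:

```python
def find_lowest(numbers):
    """Return the smallest number in a non-empty list.

    The kind of well-defined, self-contained problem the Copilot
    demos showcase: no external dependencies, no ambiguity."""
    lowest = numbers[0]
    for n in numbers[1:]:
        if n < lowest:
            lowest = n
    return lowest
```

Compare that with “send an SMS when stock runs low”, which drags in messaging gateways, inventory state, and business rules that nobody has written down yet.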

Copilot was trained on GitHub’s enormous wealth of other people’s code. This in itself is contentious, because when it autosuggests a solution, that might be your code that it’s reproducing without any license. Much has been made of the legality and the ethics of this in the tech press and on social media, so I don’t want to go into that here.

As someone who trains and coaches teams in code craft, though, I have other concerns about Copilot.

My chief concern is this: what Copilot does, to all intents and purposes, is copy and paste code off the Internet. As the developers of Copilot themselves admit:

GitHub Copilot doesn’t actually test the code it suggests, so the code may not even compile or run. 

https://copilot.github.com/

I warn teams constantly that copying and pasting code verbatim off the Internet is like eating food you found in a dumpster. You don’t know what’s in it. You don’t know where it’s been. You don’t know if it’s safe.

When we buy food in a store, or a restaurant, there are rules and regulations. The food, its ingredients, its preparation, its storage, its transportation are all subject to stringent checks to make sure as best we can that it will be safe to eat. In countries where the rules are more relaxed, incidents of food poisoning – including deaths – are much higher.

Code is like food. When we reuse code, we need to know if it’s safe. The ingredients (the code it reuses), its preparation and its delivery all need to go through stringent checks to make sure that it works. This is why we have a specific package design principle called the Reuse-Release Equivalency Principle – the unit of code reuse is the unit of code release. In other words, we should only reuse code that’s been through a proper, disciplined and predictable release process that includes sufficient testing and no further changes after that.

Maybe that Twinkie you fished out of the dumpster was safe when it left the store. But it’s been in a dumpster, and who knows where else, since then.

So my worry is that prolific use of a tool like Copilot will riddle production software – software that you and I consume – with potentially unsafe code.

My second concern is about understanding and – as a trainer and coach – about learning. I work with developers all the time who rely heavily on copying and pasting to solve problems in their code. Often, they’ll find an example of something in their own code base, and copy and paste it. Or they’ll find an example on the Web and copy and paste that. What I’ve noticed is that the developers who copy and paste a lot tend to pick things up slower – if ever.

I can buy a ready-made cake from Marks & Spencer, but that doesn’t make me a baker. I learn nothing about baking from that experience. No matter how many cakes I buy, I don’t get any better at baking.

Of course, when folk copy and paste code, they may change bits of it to suit their specific need. And that’s essentially what Copilot is doing – it’s not an exact copy of existing code. Well, you can also buy plain cake bases and decorate them yourself. But it still doesn’t make you a baker.

Some will argue “Oh, but Jason, you learned to program by copying code examples.” And they’d be right. But I copied them out of books and out of computing magazines. I had to read the code, and then type it in myself. The code had to go through my brain to get into the software.

Just like the code had to go through Copilot’s neural network to get into its repertoire. There’s perhaps an irony here that what Codex has done is automate the part where programmers learn.

So, my fear is that heavy use of Copilot could result in software that’s riddled with code that doesn’t necessarily work and that nobody on the team really understands. This is a restaurant where most of the food comes from dumpsters.

Putting aside other Copilot features I might take issue with (generating tests from implementation code? – shudder), I really feel that it’s a brilliant solution to completely the wrong problem. And I’m not the only one who thinks this.

If we were to observe developers and measure where their time goes, how much of it is spent looking for code examples? How much of it is spent typing code? That’s a pie chart I’d like to see. What we do know from decades of experience is that developers spend most of their time trying to understand code – often code they wrote themselves. (Hands up. Who else hates Monday mornings?)

Copilot’s main selling point is like trying to optimise a database application that does 10 reads for every 1 write by making the writes faster.
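To put a rough number on that analogy, Amdahl’s Law applies: if only a small fraction of our time goes on the thing being sped up, the overall gain is capped accordingly. A back-of-the-envelope sketch (the 10:1 ratio is from the analogy above; the rest of the numbers are purely illustrative):

```python
def overall_speedup(fraction_optimised: float, local_speedup: float) -> float:
    """Amdahl's Law: overall speed-up when only a fraction of the
    total work gets faster by the given local factor."""
    return 1 / ((1 - fraction_optimised) + fraction_optimised / local_speedup)

# Writes are 1 operation in 11 (~9% of the work). Even doubling
# write speed improves the whole application by under 5%:
print(round(overall_speedup(1 / 11, 2.0), 3))
```

Swap “writes” for “typing in code” and “reads” for “reading and understanding code”, and you get a sense of why making the typing faster barely moves the needle.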

Having the code pasted into your project for you doesn’t reduce this overhead. It’s someone else’s code. You have to read it and you have to understand it (and then, ideally, you have to test it.) It breaks the Reuse-Release Equivalency Principle. It’s not safe reuse.

And Copilot isn’t a safe pair programming partner, being as its only skill is fishing Twinkies out of the code dumpster of GitHub.

I think a lot of more experienced developers – especially those of us who’ve lived through both the promise of general A.I. (still 30 years away, no matter when you ask) and of Computer-Aided Software Engineering – have seen it all before in one form or another. We’re not going to lose any sleep over it.

The tagline for Copilot is “Don’t fly solo”, but anyone using it instead of programming with a real human is most definitely flying solo.

Wake me up when Copilot suggests removing the duplication it’s creating, instead of generating more of it.

Wax On, Wax Off. There’s Value In Simple Exercises.

One of the risks of learning software development practices using simple, self-contained exercises is that developers might not see the relevance of them to their day-to-day work.

A common complaint is that exercises like the Mars Rover kata or the Fibonacci Number calculator look nothing like “real” code. They’re too simple. There’s no external dependencies. There’s no UI. And so on.

My response to this is that, yes, real code is much more complicated, but if you’re just starting out, you ain’t up to that level of complicated – nowhere near. When students have demanded more complex exercises, the inevitable result is they get stuck and then they get frustrated and they never manage to make much progress. They’re trying to learn to swim in the Atlantic, when what they need is a nice safe shallow pool to get them started in.

So, these simple exercises help students to build their skills and grow their confidence with practices. They also help to build habits. Taking Test-Driven Development as an example, outside of the design thinking that goes on top of the practice, much of it is about habits. Writing a failing test first is a habit. Seeing the test fail before you make it pass is a habit. And so on.
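To make those habits concrete, here’s what a single red-green step on the Fibonacci kata mentioned earlier might look like (a minimal Python sketch; the point is the discipline, not the code):

```python
def fibonacci(n):
    """Just enough implementation to make the failing tests pass --
    written only AFTER the tests below were seen to fail."""
    if n < 2:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

# Habit 1: write a failing test first.
# Habit 2: see it fail before making it pass.
def test_first_two_numbers_are_their_own_index():
    assert fibonacci(0) == 0
    assert fibonacci(1) == 1

def test_subsequent_numbers_sum_the_previous_two():
    assert fibonacci(5) == 5

test_first_two_numbers_are_their_own_index()
test_subsequent_numbers_sum_the_previous_two()
```

In a kata you’ll run this tiny loop dozens of times; that repetition is the whole point of the exercise.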

In tackling a simple exercise like the Mars Rover kata, you may apply these habits 20 or more times before you complete the exercise. That repetition reinforces the habits, just like practicing piano scales reinforces muscle memory (as well as building actual muscles, so that you can play faster and more consistently).

As an amateur guitar player, I try to find time every day to repeat some basic exercises. They have nothing to do with real music. But if I don’t do them, I become less capable of playing real music with confidence.

Likewise, as a software developer, I try to find time every day to repeat some basic exercises. Code katas tend to be perfect for this. When it gets more complicated, I can end up bogged down in the complexity – googling APIs, noodling with build scripts, upgrading frameworks and tools (yak shaving, basically). This is also what happens on training courses. As soon as you add, say, React.js, the whole exercise slows to a crawl and the original point of it gets buried under a pile of unshaved yaks.

In music, there are short-form and long-form pieces. To grow as a musician, you do need to expand the scope of the music you play. Not every song can be a 4-bar exercise.

To grow as a software developer, you do need to progress from simple self-contained problems to larger, interconnected systems. But my experience as a developer myself and as a trainer and coach is that it’s a mistake to start with large, complex systems.

It’s also a mistake to think that once you’ve graduated to catching bigger fish, there’s no longer any value in the small ones. Just as it’s a mistake to think that once you’ve learned to play piano concertos, there’s no value in practicing scales any more.

Those habits still need reinforcing, and when I’ve lapsed in daily short-form practice, I find myself getting sloppy on the bigger problems.

Now, here’s the thing: when I teach developers TDD, to begin with they’re focusing on how they’re writing the code far more than what code they’re writing, because that way of working is new to them. They have to remind themselves to write a failing test first. They have to remind themselves to see the test fail. They have to remind themselves to run the tests after every refactoring.

I try to bring them to a point where they don’t need to think about it any more, freeing their minds up to think about requirements and about design. That takes hours and hours of practice, and the need for regular practice never goes away.

Similarly, after thousands of hours of guitar practice, you’ll notice that I don’t even look at what my picking hand is doing most of the time. The pick just hits the right string at the right time to play the note I want to play, even when I’m playing fast.

It’s the same with practices like TDD and refactoring. As long as I maintain those good habits, I don’t have to consciously remind myself to apply them on real code – it just happens. And the end result is code that’s more reliable, simpler, modular, and much easier to change.

So you may be thinking “What have these simple exercises got to do with real software?” But they do have a serious purpose, and they do help build and maintain fundamental habits, freeing our minds to focus on the things that matter.

As Mr. Miyagi in Karate Kid says, ‘Wax On, Wax Off’.