Chapter 1: Introduction
- this book will teach concepts that can be applied; I love books that teach concepts! I’d far rather apply principles and concepts than rules.
- learn to recognize red flags when reviewing code (your own or someone else’s)
- you can take the concepts in this book too far: it’s important to use moderation and discretion
- every rule has its exceptions
- every principle has its limits
Chapter 2: The Nature of Complexity
- “Complexity is anything related to the structure of a software system that makes it hard to understand and modify the system.”
- I love this simple definition of complexity: it tracks with my own experience and with the good advice I’ve read in The Pragmatic Programmer. When a software system loses the ability to be easily changed, you should address that problem. Good software remains easy to change, because the one constant in software design is change.
- Symptoms of complexity
- change amplification: “a seemingly simple change requires code modifications in many places”
- cognitive load: “how much a developer needs to know in order to complete a task”
- “Sometimes an approach that requires more lines of code is actually simpler, because it actually reduces cognitive load.” If you’ve ever seen some crazy “one-liner” Python list comprehension that should have been a few lines of code, this is going to resonate (see the first sketch at the end of this chapter’s notes).
- unknown unknowns: “it is not obvious which pieces of code must be modified to complete a task, or what information a developer must have to carry out the task successfully”
- of all symptoms of complexity, this is the worst / most expensive
- “One of the most important goals of good design is for a system to be obvious.”
- Causes of complexity
- dependencies
- a dependency exists when a given piece of code cannot be understood and modified in isolation
- the signature of a function creates a dependency between its implementation and its invoking code: if a new parameter is added (in languages that don’t support keyword / named arguments), you’ll often have to modify the invoking code to support the new implementation (the second sketch at the end of this chapter’s notes shows this)…especially if, like me, you have a weird habit of enjoying alphabetically-sorted parameters
- dependencies are intentional and a natural part of software design, but a goal of system design is to reduce the number of dependencies and make them as simple and obvious as possible
- leads to change amplification and a high cognitive load
- obscurity
- obscurity occurs when important information is not obvious
- obscurity is often caused by non-obvious dependencies
- inconsistency compounds obscurity, e.g., using the same variable name for two different purposes
- often caused by inadequate documentation, but excessive documentation is also a sign that your system design is not obvious enough
- creates unknown unknowns and contributes to cognitive load
- Complexity is incremental
- not caused by a single catastrophic error, but accumulates from many small decisions
- this incremental nature makes it hard to control
- once accumulated, it’s hard to eliminate since reducing one instance of complexity does not simplify the entire codebase
- to slow the growth of complexity, take a “zero tolerance” approach
- Summary
- “Complexity comes from an accumulation of dependencies and obscurities. As complexity increases, it leads to change amplification, a high cognitive load, and unknown unknowns.”
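To make the cognitive-load point concrete, here’s a small Python sketch of my own (not from the book; the order data and function names are invented for illustration). Both functions compute the same thing, but the longer one lets a reader take it in one step at a time:

```python
# Dense: filter, transform, and deduplicate in a single expression.
# Correct, but the reader has to unpack everything at once.
def active_order_ids_dense(orders):
    return sorted({o["id"] for o in orders if o.get("status") == "active" and not o.get("archived")})

# Longer, but each step carries one idea, so the cognitive load is lower.
def active_order_ids(orders):
    ids = set()
    for order in orders:
        if order.get("status") != "active":
            continue
        if order.get("archived"):
            continue
        ids.add(order["id"])
    return sorted(ids)
```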
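And a sketch of the signature-as-dependency point, again my own toy example (the `send_report` function and addresses are made up): adding a required positional parameter would force every caller to change, while a keyword argument with a default keeps existing call sites working.

```python
# Original signature: every call site depends on exactly these positional parameters.
def send_report(recipient, body):
    print(f"to={recipient}\n{body}")

send_report("ops@example.com", "All systems nominal")

# Adding a new *required* positional parameter would force every existing
# caller to change (change amplification). A keyword argument with a default
# keeps the old call sites valid, so the dependency stays narrower.
def send_report(recipient, body, attachments=None):
    attachments = attachments or []
    print(f"to={recipient} attachments={len(attachments)}\n{body}")

send_report("ops@example.com", "All systems nominal")                      # unchanged caller still works
send_report("ops@example.com", "Q3 numbers", attachments=["report.pdf"])   # new capability
```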
Chapter 3: Working Code Isn’t Enough (Strategic vs. Tactical Programming)
- John describes tactical programming as a mindset where “your main focus is to get something working, such as a new feature or a bug fix.” He frames tactical programming as short-sighted, which is not a good thing. I take some issue with this, but I’ll expound on that later. For now, “tactical programming” is programming in a state of mind where you “…tell yourself that it’s OK to add a bit of complexity or introduce a small kludge or two, if that allows the current task to be completed more quickly.”
- I’ve been in this mindset; deadlines put pressure on us all. It’s long-term harmful to the codebase and to other developers’ attempts to add features without piling further kludges / workarounds onto what you’ve introduced.
- I agree that it’s not long-term beneficial to the system design. You often end up introducing technical debt that never gets paid off.
- I love the term tactical tornado (used to describe the coding “rockstars” that churn out unmaintainable code at a ridiculously-fast rate).
- John describes a strategic programmer as one who thinks about the long-term impact of the code they are writing. If you’re a strategic programmer, you know it’s unacceptable to introduce unnecessary complexity into a codebase, because you feel the cost of that technical debt in your bones. The strategic programmer’s primary goal is to “produce a great design, which also happens to work.” Again, I really like this concept; it fits well with the wisdom of The Pragmatic Programmer, which lauds code that is easy to change.
- Some strategic investments are up-front: you’re investing, and you know that you’ll be a little slower now (he estimates 10–20%). Don’t try to pay all of this cost upfront—that’s a waterfall project management fallacy. You can’t know exactly what to build up-front…and neither can your clients or project managers.
- Some is reactive: you will make mistakes, and you know that good system design means that you’ll have to address them, continuing to improve the codebase. I love the metaphor of code as a garden, rather than code as a statue. If your code is a garden, you know that you’ll have to adjust the layout as things grow. If it’s a statue, you think what you’ve wrought will be perfect forever just as it is.
- Why move 10–20% slower up front? Because as you continue to grow your codebase and adapt to the constant of change, you’ll want to do that faster. John estimates 6–18 months before your investment pays great dividends.
- What about startups? Well, Meta’s (then Facebook’s) developers used to have the motto of “move fast and break things”. That has since changed to “move fast with solid infrastructure.” Also, Google and VMware were founded around the same time, but their solid investments have kept them out of the technical-debt hell that is Meta’s codebase (allegedly…I’ve never worked there).
Okay, now to air some grievances. A lifetime ago, I was in the US Army for a couple of deployments. I would argue (despite agreeing with all the underlying principles espoused in this chapter) that John has the wrong metaphor: I think that strategy is tactics at scale. There are elements of strategy in tactics, and elements of tactics in strategy. At the very least, you don’t want someone without first-hand experience in tactics making strategic decisions.
Chapter 4: Modules Should Be Deep
- modules are independent containers of complexity: “In modular design, a software system is decomposed into a collection of modules that are relatively independent.”
- e.g., classes, subsystems, or services
- there’s always some cross-dependency between modules: a function signature change forcing argument changes at call sites, or one method not working until another is called first
- the goal of modular design is to minimize dependencies between modules
- do this by thinking of each module in parts: interface and implementation
- the interface describes what a module does
- the implementation keeps the promises of the interface
- for this book, a module is anything that has an interface and an implementation
- the best modules are those whose interfaces are much simpler than their implementations, for two reasons:
- simple interfaces minimize the complexity that a module imposes on the rest of the codebase
- if a module is modified in a way that doesn’t affect the interface, no other module will be affected by the change
- “an interface to a module contains two kinds of information: formal and informal”
- the formal parts of an interface are explicitly specified in the code: e.g., the signature with parameter names and types, the return value, exceptions thrown
- the informal elements are not understood / enforced by the compiler / interpreter: e.g., high-level behavior (deleting a file after downloading it), or the requirement that one method be called before another (see the first sketch at the end of this chapter’s notes)
- in general, if a developer needs to know something before using a module, this information is part of the module’s interface
- clearly specifying an interface reduces the amount of “unknown unknowns”
- an abstraction is a simplified view of an entity that omits unimportant details; this allows us to think about and manipulate complex things
- a module’s abstraction is its interface
- the more unimportant details omitted from an interface, the better, but the word unimportant is crucial to get right
- an abstraction can go wrong in two ways
- include details that are not important, making the abstraction more complex than necessary and increasing cognitive load
- omit important details, leaving the developer without all the information they will need to use the module
- a microwave is a great example of a good abstraction: simple buttons to control great complexity
- deep modules provide powerful functionality with simple interfaces
- module depth is a way of thinking about cost versus benefit: the cost a module imposes (in terms of complexity) is its interface; the benefit it provides is its functionality
- depth is functionality, width is interface complexity
- Linux / Unix file I/O is a great example of a module that hides massive amounts of complexity behind a small interface
- shallow modules have a complex interface for relatively-limited functionality
- a class implementing a linked list is a good example of this: relatively little functionality is provided compared to the interface (see the second sketch at the end of this chapter’s notes)
- shallow modules are sometimes unavoidable, but they don’t provide much leverage against complexity: the documentation for the methods will probably be longer than the code required to implement them
- small modules tend to be shallow
- shallow modules are a red flag
- classitis
- John believes that “the value of deep classes is not widely appreciated today”, arguing that we’re taught to write multiple smaller classes to achieve functionality and that by doing so we increase boilerplate and complexity at a system level.
- I don’t think I’ve ever heard the original objection—that classes should be small just because smaller is better. It doesn’t make sense to me to split modules apart for some arbitrary “lines of code” count. Classes / modules should group together things that would change for the same reason, regardless of the module’s size.
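To illustrate the formal vs. informal split, here’s a small Python sketch of my own (the Downloader class and its behavior are invented, not from the book). The signatures and type hints are the formal part of the interface; the docstring carries the informal contract that nothing in the language enforces:

```python
class Downloader:
    """Fetches files from a remote store (toy example).

    Informal parts of the interface (nothing below is enforced by the interpreter):
      * connect() must be called before fetch()
      * fetch() deletes the remote copy after a successful download
    """

    def connect(self, host: str) -> None:   # formal: name, parameter name, types
        self._host = host

    def fetch(self, path: str) -> bytes:    # formal: signature and return type
        if not hasattr(self, "_host"):
            raise RuntimeError("connect() must be called before fetch()")
        # ...download from self._host, delete the remote copy, return the bytes...
        return b""
```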
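And a deep-vs-shallow sketch under the same caveat (`load_config` and `ConfigFile` are invented examples): the deep function offers one call and hides file handling, encoding, and the missing-file case, while the shallow class adds interface without adding functionality.

```python
import json

# Deeper: one call; the caller never sees file handles, encodings,
# or the missing-file case.
def load_config(path: str) -> dict:
    try:
        with open(path, encoding="utf-8") as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

# Shallower: the interface is about as big as the implementation, and
# callers still have to know everything they knew before.
class ConfigFile:
    def __init__(self, path: str):
        self.path = path

    def open(self):
        self._f = open(self.path, encoding="utf-8")

    def read(self) -> dict:
        return json.load(self._f)

    def close(self):
        self._f.close()
```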