Notes and sources on the forgotten lifecycles

Formalized methodologies (SDLC)

SDLC on Wikipedia:
This Wikipedia article does briefly mention the phase-out and disposal phases. So yes, it's there, it just doesn't get much attention.

Some sites talking about SDLC:
These sources got me thinking: no one is talking about what happens after deployment.

What happens to software during its lifetime

Geoff Greer gives some examples of big software projects that took a long time to adapt. He also links to a piece that explains software rot and its effects on the development process. In the discussion on Hacker News, people argued that the rot Geoff describes is actually 'ossification', or software entropy.

- One source introduces the Broken Window theory to software development. Main takeaway: something as insignificant as a broken window (in a neighbourhood) can start a chain reaction of further decay. When you encounter a broken window in your software, beware: fix it, it's important.
- Another is a decent review of the effects and causes/sources of software entropy. In my opinion, the attempted mathematics seems a bit far-fetched.

Software entropy arises from a lack of knowledge. It results from the divergence between our communal assumptions and the actual behavior of an existing system.

Hooks into the idea that original developers leave, new hires enter the scene, and the software goes to shit. Software === knowledge, and only a part of this knowledge is captured (efficiently) in code. So, usually, knowledge about the software slowly diminishes over time.

What can be done about it? (2004) - Martin Fowler, a giant in our field, suggests going the Strangler way: the new software slowly and steadily takes over functions of the old software. Another article explains 'Strangulation', gives some compelling case studies, and lists best practices. This is good info for those who find themselves in phase 3 (rewrite?) and phase 4 (painful migrations) I mentioned in the article.
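To make the Strangler idea concrete, here is a minimal sketch of the core mechanism: a routing facade that sends migrated features to the new application while everything else still hits the legacy system. The path prefixes and backend names are hypothetical; a real setup would typically do this in a reverse proxy, but the decision logic is the same.

```python
# Strangler pattern sketch: a facade routes each request either to the
# new application (for features already migrated) or to the legacy one.
# MIGRATED_PREFIXES and the backend names are illustrative assumptions.

MIGRATED_PREFIXES = ["/invoices", "/reports"]  # features the new app has taken over

def route(path: str) -> str:
    """Return the backend that should serve this request path."""
    for prefix in MIGRATED_PREFIXES:
        if path == prefix or path.startswith(prefix + "/"):
            return "new-app"
    return "legacy-app"  # everything not yet strangled stays on the old system

# As migration proceeds, more prefixes move into MIGRATED_PREFIXES,
# until the legacy backend serves nothing and can be retired.
print(route("/invoices/42"))   # new-app
print(route("/customers/7"))   # legacy-app
```

The point is that the old system is never rewritten in one big bang: it shrinks feature by feature, behind a facade, until it can be disposed of.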

About the tips

I came up with this list as a result of witnessing (and being part of) the software development process: lots of trial, lots of error. At some point you start to notice patterns. I've tried to capture these patterns.

About the software design phase: Traditional methods advise us to do deep analyses of requirements, to produce designs, and to test and validate these. That's not very agile though, and lots of lean and mean startups don't have the resources (or patience) to follow this route. These methods also don't guarantee that our product will match what our (intended) users want from the software. The best way to learn about a problem is to try to solve it for real users, as quickly as possible. Either way, there is a real chance that our software is designed in a way that isn't conducive to our future plans. This is bound to happen at some point.

About the lifetime of (web) software: It seems to me that, apart from the usual rate of demise of software, the high rate of change in (web) technology has only added to the untenability of 'modern' web software. Take for example the AngularJS situation: it became popular between 2012 and 2014, and basically killed itself with the introduction of version 2 (source). What we should learn from this: don't put your eggs in an unstable basket. Period. Don't be persuaded that everything needs to be rewritten in Framework X. Just wait a few years and pick something that is stable. Or, if you do want to go the 'modern' route, then be willing to tolerate multiple paradigms in your project, because you ARE going to be adding a new hip, trendy solution every few months. Unless your software is so tiny you can rebuild it every few months in Framework X, or you have money to burn.

To put this in context: my advice applies to software that has proven itself to be valuable to a large group of users. Because it's valuable to users, it's important to retain that value. And because we want to retain value, we are challenged: the churn of unstable technology puts us at constant risk of losing it. A strong foundation for your software should consist of stable, proven, backward-compatible (some may say boring) technology. This is your basis. On top of this basis you may introduce small applications built with hip technology, as long as these are focused on doing one thing and doing it well. These small applications don't force us to go in fully blown. If you attain this level, you are no longer talking about one big monolith; you have a 'network of collaborating applications', each with its own lifetime and lifecycles. The applications are in no way immune to decay, but when one application has reached the end of its useful life, you're prepared: it won't take down the entire network of applications.

Some people may get the idea that this is 'microservice philosophy'. In a way that's true. But, more importantly, it's absolutely not microservices as they are generally interpreted: creating clusters of small applications that can scale independently and exchange messages on some message bus. A lot of people think that is simple, but it requires a tremendous amount of expertise to run and maintain. Again, microservices are just a tool, and sometimes the right one for certain situations.

I'm talking about more of a strategy: instead of having only one repository that contains the entire thing, we open ourselves up to having multiple isolated applications, for different types of users, for distinct sets of problems. Each application has its own isolated responsibilities. Yes, we now have a network of applications to maintain and run; this is more work, but we are more resilient. New challenges may be addressed in a new application; some applications reach end-of-life and may be disposed of. Circle of life. The business is not tied to the lifecycle of one application; it is tied to a dynamic collection of applications that are maintained.

Return to The forgotten lifecycles of software