When do we use a waterfall model?

Waterfall model? LOL!

The waterfall model is quite old but still comes up frequently in discussions of software development processes. The history of this model is largely unknown - yet it reveals the model's challenges and why it should not be used.

The simple waterfall model describes a process model for software development. It has the following properties:

  1. The process is broken down into individual phases such as requirements, design, implementation, verification and maintenance.
  2. A phase builds on the full results of the previous phase. Therefore, a later phase cannot be started until the previous phase has been fully completed.
  3. Development proceeds forward through the phases. Each phase is carried out exactly once.

The visualization is accordingly a waterfall in which the phases are arranged from top left to bottom right. The "water" corresponds to the result documents of each phase, which then flow into the subsequent phase. The individual phases can differ from project to project.

A prerequisite for this model is that the requirements do not change significantly. Otherwise the result of the first phase is already based on incorrect assumptions - and with it all further phases and their results. In practice, requirements are often not firmly or completely known. Agile methods that take this into account are then the better choice. But let's assume for the rest of this text that this fact alone does not rule out the waterfall model.

Approaches to software development can only be measured by their success in practice. The question, then, is what experience has been gathered with the waterfall model.

The origin

The waterfall model for software development has its origins in the 1950s. At that time, the USA designed the SAGE system (Semi-Automatic Ground Environment). It served to coordinate air defense, especially against a possible attack by the USSR. Essentially, it was supposed to generate a comprehensive, consolidated picture from the various radar stations and then coordinate engaging the targets. This is an area in which the requirements should be relatively clear and unchangeable.

At that time there were no high-level programming languages, no terminals and no time-sharing - only punch cards and tube computers. Each SAGE direction center received an AN/FSQ-7 computer. These were the most powerful computers of their time, with a floor area of 2,000 m², 49,000 tubes, 75,000 instructions per second and a magnetic-core memory of 65,536 32-bit words.
Because hardware at that time was many orders of magnitude more expensive and less powerful than it is today, projects had to be far more sparing with the computing time spent supporting development. Modern software development processes, by contrast, can spend much more computing time supporting developers.

The SAGE project was more expensive than the Manhattan Project to build the atomic bomb - the very weapon SAGE was supposed to protect against. The main result was software totaling 0.5 million instructions. A quarter of the code was the actual system; the rest was used to support development.


In 1956, Herbert Benington described the basic ideas for implementing this system. The paper describes the phases in which the implementation should take place and a linear pass through these phases, as in a waterfall model. However, this description takes up only about two pages and represents only part of the working methods presented. Among other things, the paper also discusses an architecture and various tools for developing such systems.

In principle, the requirements for the SAGE software were probably essentially fixed, so such a model does not seem nonsensical. Even though the process is not called a "waterfall model" in the paper, the waterfall model appears to have been relatively successful: the project delivered software that was actually used, even if it missed the original deadline by a year.

Or not?

In 1983 the paper was republished in the "IEEE Annals of the History of Computing". The editor makes an important observation: SAGE was one of the first systems so large that a single person could no longer develop and understand it alone, whereas before that, individuals had designed and written programs very successfully. So it marked the beginning of the challenge that dominates software development today: implementing a complex software system in a coordinated manner with a team.

Even more interesting than this observation by the editor is the author's preface to the republication. The SAGE project initially wrote a prototype with 35,000 instructions. That alone is a significant modification of the simple waterfall model, since all phases have to be run through once for the prototype and then again for the actual system, while the simple waterfall provides for only a single pass. Twenty people developed this prototype, which clarified all modules, interfaces and performance requirements. These people significantly influenced the implementation of the actual system.

The preface also states that the biggest mistake in the project was the jump from 35,000 instructions in the prototype to more than 100,000 instructions in the real system. In retrospect, it would have made more sense to create a "framework" for 250,000 instructions and to incorporate and further develop the 35,000-instruction prototype within this "framework". An estimated 50 percent of the costs could have been saved. But this is no longer a simple waterfall, because code for new functionality is added, so that at least phases such as design, implementation and verification are run through several times.

Experience with one of the first complex software development projects suggests that a simple waterfall is not a solution for software development - despite relatively clear requirements.

Apart from that, the SAGE system, including the tube computers, remained in use until the 1980s. Given the technological and other changes over those decades, the term "maintenance" seems insufficient for this phase. During this time, essential new features were certainly added. For that, phases from the simple waterfall such as requirements, design and implementation must have been run through again.

Winston Royce takes up the ideas

Sometimes the invention of the waterfall is also attributed to Winston Royce, who wrote a paper in 1970 about his experiences with the software development of large systems. Royce worked in the aerospace sector, where requirements are rather stable and the greatest possible freedom from errors seems desirable from the start - so an area for which the simple waterfall model could be useful. And in fact there is an illustration in the paper that resembles the simple waterfall; the name "waterfall model" does not appear, however.

But here, too, the rest of the paper shows several significant variations from the pure waterfall, for example:

  • There should be feedback from one phase to the previous phase or even further back. This contradicts a single, linear pass through the phases.
  • The entire development process should be run through twice. This is reminiscent of the prototype from the SAGE project.

Accordingly, there are further illustrations in the paper that describe other processes that differ significantly from the simple waterfall.

The Leprechauns of Software Engineering

In his book "The Leprechauns of Software Engineering" Laurent Bossavit deals with various myths in the field of software development. The book is a plea for critically questioning the fundamental principles of software development.

One chapter deals with the waterfall model. Bossavit is primarily concerned with the interpretation of the Royce paper. In his view, advocates of the agile movement see the waterfall model as a misinterpretation of the Royce paper: from the agilists' point of view, the paper actually proclaimed an agile-iterative approach. In reality, however, Royce's process does not have to run through the requirements phase again.

Critics of the agile movement see the simple waterfall as a model that nobody proclaims, but which only serves to discredit formal processes. After all, Royce's model is not that inflexible and does not recommend a single, sequential run through the phases. In fact, according to Bossavit, both interpretations are inadequate.

Bossavit further shows that Royce's paper is at least one source of the well-known graphic representation of the waterfall. It became popular because Barry Boehm and his company TRW saw this paper in the 1980s as a justification for existing top-down approaches and cited it accordingly. They republished the Royce and SAGE papers. A little later, Boehm introduced the iterative-incremental spiral model and interpreted the Royce paper as its forerunner, since it had already established a prototype phase. Bossavit points out that Boehm thus also plays an important role in the history of the waterfall model. Like Benington and Royce before him, Boehm does not recommend the simple waterfall but develops it further.

The myth of a waterfall in the military

US military standards are sometimes cited as additional sources for waterfall procedures. Strictly speaking, however, MIL-STD-490A (1985) discusses specifications and MIL-STD-483A (1985) discusses configuration management. MIL-STD-2167 (1985) defines software development standards. It does indeed speak of a software development cycle with different phases. However, according to its section 4.1.1, this cycle can be carried out in several iterations, and the phases should typically overlap. This, too, is not necessarily the simple, sequential pass that the simple waterfall provides for. It is of course conceivable that older revisions of these standards, or older standards, differ - but recommendations in the field of software development that are more than 35 years old are probably no longer the state of the art anyway.

If Wikipedia is to be trusted, the German V-Modell development standard of the public sector also makes it possible to “map the activities of the V-Modell onto a waterfall model or a spiral model, for example”. The V-Modell XT from 2005 is even based on agile and incremental approaches. So even with this model, a simple waterfall is not mandatory.


The people credited with the simple waterfall model neither invented nor proclaimed it - and that despite working in environments in which requirements were presumably relatively fixed and a rigid process would still have been useful to cope with existing limitations, such as the hardware available to developers during development.

Of course, it is conceivable to be successful with a simple waterfall process - but since Royce at the latest, and thus for 50 years, other process models have been recommended. In fact, these problems have been clear since the SAGE project and thus since the beginning of team-based software development.

If requirements change, this process has further obvious and significant weaknesses, since in principle one would have to start over after every significant change. So anyone who recommends a simple waterfall has not understood the state of the art of 50 or 65 years ago. And in those decades, technology has developed further, simplifying many things considerably and making it possible to support the development process itself much better with software. This puts waterfall-like processes at an even greater disadvantage.

The simple waterfall probably serves primarily as a counter-model to agility and thus as an example of how not to do it. But that ignores the modifications that were applied from the very beginning.

Perhaps it is also tempting to finally run a "clean" process. It may seem intuitive to finish things, consolidate them, and then build on them. I, too, have caught myself regarding such a phased approach as the "actually correct" and "clean" way. A simple waterfall can also give an illusion of control: there is a clear division into phases with clear results. But problems often only become apparent when the software ships and users actually use it. The simple waterfall delays this feedback, because all phases must first be completed instead of delivering a partial solution as quickly as possible and collecting feedback on it.

Finally, a suggestion for an experiment: the next time someone suggests a waterfall process, ask them who invented and recommended it.

tl;dr

The simple waterfall process was not being recommended even 65 years ago - and team-based software development was only just beginning at that time. Research in original sources is helpful.

Many thanks to my colleagues: Lena Kraaz, Martin Kühl, Tammo van Lessen, Torsten Mandry, Jörg Müller, Joachim Praetorius, Gernot Starke and Stefan Tilkov for their comments on an earlier version of the article.

Eberhard Wolff

Eberhard Wolff (@ewolff) works as a fellow at INNOQ. He has been working as an architect and consultant for more than 15 years - often at the interface between business and technology. His technological focus is on modern architecture approaches - cloud, continuous delivery, DevOps, microservices or NoSQL often play a role.
