Introduction to the concept of visualizing time in projects
The three elements of the ‘iron triangle’ of project management may be simplistic and outdated, but they still serve a useful function for understanding the essence of a project and the management needed to achieve its objectives. Project management in its modern form, and its equivalent in earlier times, required the responsible manager to:
• Understand what had to be achieved to satisfy the client (in modern terms scope, design, quality, reliability, maintainability, function and form; these are all interrelated).
• Understand how much money was available to fund the works and how flexible this constraint was.
• Understand how to accomplish the works and the time available to complete the ‘project’. Then and now this involves a complex set of functions to make sure the right people and materials are available in the right place at the right time to allow the work of the project to progress smoothly.
Understanding shape and form
The perennial challenge facing every ‘project manager’, from antiquity through to the modern day, is ensuring that they understand enough of the project client’s requirements to be comfortable they are working on the right thing. The next step is to break down the overall project into multiple smaller challenges and to ensure each work team and each worker understands what they have to achieve. The techniques used to develop this understanding and convey the information seem to have remained fairly constant for millennia.
Narrative, stories and discussion
Talking through the client’s objectives and requirements using effective questions and ‘active listening’ has always been central to building understanding. The only problem with dialog is recording the agreement and actually knowing that both sides of the discussion have an identical mental picture of what the finished project will look like. This is less of a problem in relatively slow-changing environments that employ well-understood techniques and processes, and where the ‘product’ is tangible (eg, a pyramid, cathedral or ship). When something totally new is being discussed, problems of understanding may arise, but this has not stopped the creation of ‘use cases’ being a central design element in most software projects.
Models
Simple models of all, or part, of a building or project help the explanation process by providing a central focus for the discussion. There are suggestions that the builders of the pyramids used models to demonstrate proposals.
Edwards (1993) describes two pyramid models carved from limestone: one of a stepped pyramid, the other of a smooth sided pyramid (the ‘new idea’).
By the 17th century models were in regular use. The National Maritime Museum in London has a 1:48 contemporary skeleton model of the St Michael, a 98-gun warship built by John Tippetts and launched at Portsmouth Royal Dockyard in 1669. This is possibly the oldest model that can be connected to a specific vessel (Ball, 2016). By 1716 the British Navy Board had ordered that all ship drafts (proposals) for new vessels and repairs be accompanied by a scale model (Royal Museums Greenwich, 2018).
The reconstruction of St Paul’s Cathedral after the Great Fire of 1666 similarly used a model at 1:25 scale to demonstrate the design intent of the architect Sir Christopher Wren. The model was built in 1673/74 by William Cleere and was designed to be viewed at ‘eye level’ to give a true impression of the interior (St Paul's Cathedral, 2018). Fast forward 350 years and the same techniques are used in the virtual world of today’s design.
Formalised designs
The use of drawings and sketches also seems to have been common practice from the earliest times. The Ancient Egyptians produced plans and sketches of buildings and referenced design texts that held standard formulas derived from trial and error, such as the ‘Book of Foundations of Temples’ (Kozak-Holland, 2011). By the early 15th century the concept of perspective derived from architectural plans had been defined by Filippo Brunelleschi (Dauben, 2018).
The modern concept of engineering design was formalised in the 18th century by the French mathematician Gaspard Monge. His Géométrie descriptive (Monge, 1798) is regarded as the first book to formalise orthographic projection and descriptive geometry. Orthographic projection is a way of accurately representing three-dimensional objects using two-dimensional drawings, usually a top view (plan), a front view and one side view (the front and side elevations). In each drawing, the object is viewed along parallel lines that are perpendicular to the plane of the drawing, allowing dimensions to be measured accurately from the drawing. This work was used by the École Polytechnique, which had been established in 1794 to train all candidates for specialist civil and military engineering roles in the French republic.
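The principle is easy to state in computational terms: because the viewing lines are parallel and perpendicular to the drawing plane, each view simply discards the coordinate normal to that plane. A minimal sketch in Python (the sample point and names are ours, for illustration only):

```python
# Orthographic projection: each view discards the coordinate that is
# perpendicular to its drawing plane, so measurements stay true to scale.
point = (4.0, 2.5, 3.2)   # an illustrative (x, y, z) point on the object

x, y, z = point
plan = (x, y)             # top view: looking straight down the z axis
front_elevation = (x, z)  # front view: looking along the y axis
side_elevation = (y, z)   # side view: looking along the x axis

print(plan, front_elevation, side_elevation)
```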
The concepts published by Monge facilitated the growth and development of the drafting profession which was linked to the need to manufacture the interchangeable parts required to build and service the machinery of the industrial revolution. This trend was reinforced by the introduction of the blueprinting process, and the economy offered by creating a set of drawings that in most cases made the building of a working model unnecessary.
Understanding cost and budget
Understanding and controlling the cost of proposed projects seems to be as old as the concept of paid labor. However, the application of cost estimating and controls has always been patchy. Some projects, such as the construction of cathedrals in the Middle Ages, seem to have been developed using a ‘keep-going-until-it’s-finished’ approach that in some cases took more than 100 years to see the work completed. Other projects seem to have been far more controlled.
In 2500 BCE, the workers on Pharaoh Khufu’s Great Pyramid at Giza were paid in kind (grain, beer, etc.). With thousands of people on a site in the middle of the desert, calculating the quantities needed to be brought to site each season and managing the logistics of payment required a very sophisticated system. This degree of sophistication appears to have been well within the capabilities of the Egyptian state’s bureaucracy (Smith, 2004).
The ability to control costs may have been applied inconsistently through the millennia, but there are many examples from history showing that very precise controls were possible. For example, the available documentation for the construction and operation of the Crystal Palace in London for the Great Exhibition of 1851, contained in five reports of the Royal Commissioners, shows a high degree of control. In their second report, the Commissioners predicted a final profit of £173,000. In their fifth and final report the profit was declared at £186,436 18s. 6d. The project was built in a remarkably short timeframe, and the difficulties of taking an incomplete design through to a completed building open to the public in just 8.5 months were recognized by the Commissioners, and additional compensation was paid to the builder. However, nowhere in the records is there any indication of how this work was organized and managed to achieve the level of control evidenced by the cost reports (Weaver, 2014b).
Understanding and managing the available time
The concept of binding contracts with defined scope and costs goes back to the Roman era, and perhaps earlier. Even without the incentive of a contract, the ability to estimate and manage costs requires the ability to:
1. Estimate the amount of work involved in a project;
2. Determine the resources needed to accomplish the work in the available time;
3. Organize the workers to accomplish the work in the allowed time;
4. Deal with emerging issues to maintain the agreed cost and time.
These four elements had to be implemented and used effectively to achieve the successful outcomes detailed above, and similar successes across the millennia. We know from records dating back 6000 years that the mathematics needed to estimate physical quantities and work content were available to both the Sumerians and Egyptians (Mansfield and Wildberger, 2017). The historical records also show that the work was managed and controlled. What’s missing is any indication of the techniques used to implement this management. Was it purely intuition and learned experience? Or were there processes applied to assist in achieving the desired outcomes? The balance of this paper will look at what is currently known about the emergence of specific processes and techniques to help manage project work.
The development of the concepts that allow the visualization of time
Getting the people who are needed to do the right work, into the right place, at the right time, with all necessary tools, equipment and other resources needs a plan! Then the plan has to be communicated to the right people in sufficient time for them to be able to implement it.
It is feasible for the planning to be undertaken in a controlling manager’s mind based on intuition and learned experience. It is also feasible that the necessary information could be communicated to work teams in a series of one-on-one conversations supported by ad hoc aids such as freehand sketches drawn in the sand. However, the process is far from efficient, and even in this scenario there needs to be some framework to allow time information to be communicated effectively.
As the work being controlled becomes more complex, both the planning process and the communication process benefit from the introduction of tools and techniques to assist in:
• the formation of the plan,
• retention of the planning information, and
• the communication of that information to others as needed.
Many people need to be able to ‘see’ how the available time is planned to be used. Achieving this level of visualization requires two key components:
• First, a consistent way of describing time, both as elapsed units and as specific points in time: essentially a calendar.
• Second, a way of representing complex data that allows the relationship between time and work to be seen and understood.
This section looks at how these two requirements have evolved over the millennia.
The journey to UTC
UTC, Coordinated Universal Time, the default global standard today, has its origins in the Middle East beginning some 6000 years ago. Its antiquity helps to explain the unusual arrangement of numbers that make up the standard UTC calendar: 60 s in a minute, 60 min in an hour, 24 h in a day and varying numbers of days in the months and years (Weaver, 2014c).
The origin of 60 s and minutes
Studies of protocuneiform clay tablets show that 60 was used as a basic unit of counting during the ‘Uruk Period’ in Mesopotamia, an Early Bronze Age civilisation that lasted from c. 4000-3500 BCE (Ifrah, 2000).
The Sumerians built on this foundation starting in around 3500 BCE. In Sumerian culture, astronomy, astrology and the development of calendars were interconnected and central to religion. Their calendar used a 360-day year, and the Sumerians began the modern practice of dividing a circle into 360 degrees to represent the cycle of the seasons and the movements of the stars and planets throughout the year. Refinements continued through the Babylonian and Persian empires.
The origin of 12 and 24 h
The Ancient Egyptians developed a 24 h day, but with hours of varying length depending on the time of year. They used sundials to divide the day into 10 h of daytime, with 1 h of twilight at each end of the day (making 12 h in total). They also defined 12 h of night-time; this is known from various tables defining the stars visible during the 12 h of night (Weaver, 2014c).
Consolidation and refinement BCE
During the last couple of centuries BCE, the Ancient Greeks combined these various systems into their modern form. In one development, they divided the day into 24 h of equal length. In another, based on their knowledge that the world was a sphere, Greek astronomers normalized the lines of latitude and longitude to encompass its full 360 degrees.
Claudius Ptolemy expanded on this base to create minutes and seconds of arc. In his treatise the Almagest (circa 150 CE), he subdivided each of the 360 degrees of latitude and longitude into 60 parts, which were again subdivided into 60 smaller parts. The first division, partes minutae primae, or ‘first minute’, became known simply as the minute. The second segmentation, partes minutae secundae, or ‘second minute’, became known as the second. So although the sexagesimal system is no longer used for general computation, it is still used to measure angles, geographic coordinates and time. But, as we all know, a year is not 360 days.
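Ptolemy’s successive divisions by 60 survive in the way angles are still written today; a minimal sketch of the conversion in Python (the helper name and sample angle are ours):

```python
# Decimal degrees to the sexagesimal degrees/minutes/seconds produced
# by Ptolemy's successive divisions by 60 (illustrative helper).
def to_dms(angle: float) -> tuple[int, int, float]:
    degrees = int(angle)
    minutes = int((angle - degrees) * 60)              # partes minutae primae
    seconds = (angle - degrees - minutes / 60) * 3600  # partes minutae secundae
    return degrees, minutes, seconds

print(to_dms(12.345))   # approximately (12, 20, 42.0), ie, 12° 20' 42"
```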
The Julian Calendar
The UTC calendar with its 12 months of varying duration has a Roman origin. The Roman calendar underwent a number of improvements in the seven centuries before 46 BCE but still had a large accumulated error. In 46 BCE, Julius Caesar modernised and corrected the Roman calendar by increasing the number of days in most months to 30 or 31 days, to create a year of 365 days. These Julian months have the same number of days as modern months. To keep the calendar aligned with the Earth’s orbit around the Sun, an extra day is added to February every fourth year, making the Julian year on average 365.25 days long. This calendar became the predominant calendar used in ‘the West’ for the next 1500 years. European countries used it, and took it with them to their settlements in the Americas and elsewhere.
The Gregorian calendar
The problem with the Julian calendar was that it gained about three days every four centuries compared to observed equinox times and the seasons. After 1500 years this error was significant, and it was corrected by the reform introduced by Pope Gregory in 1582.
The Gregorian calendar retained the same months and the same number of days in each month as the Julian calendar, but changed the way ‘leap years’ are calculated. The Gregorian reform modified the Julian calendar's scheme of a leap year every fourth year as follows: every year that is exactly divisible by four is a leap year, except for years that are exactly divisible by 100; these centurial years are leap years only if they are exactly divisible by 400. This meant that the years 1700, 1800, and 1900 were not leap years, but the year 2000 was. This alteration reduced the mean length of the calendar year by 0.002%, and has resulted in a remarkably accurate calendar that will not need adjusting for several thousand years.
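The centurial rule reduces to a one-line calculation; a minimal sketch in Python (the function name is ours):

```python
def is_gregorian_leap_year(year: int) -> bool:
    """Leap year rule of the 1582 Gregorian reform."""
    # Divisible by 4, except centurial years, which must be divisible by 400.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# The centurial exceptions noted above:
print([is_gregorian_leap_year(y) for y in (1700, 1800, 1900, 2000)])
# -> [False, False, False, True]
```

Averaged over the 400-year cycle this gives 365 + 1/4 - 1/100 + 1/400 = 365.2425 days, the 0.002% reduction noted above.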
Adoption of the Gregorian calendar was very slow in Protestant and Orthodox countries; the last European country to accept the calendar was Greece, in 1923.
Year numbering
The concept of allocating a number to each year dates from 525 CE. The Anno Domini (AD) dating system was devised by Dionysius Exiguus, replacing the Roman naming convention with year numbers. Dionysius based the start of his numbering on his estimate of the year of the birth of Jesus of Nazareth (year 1). Today, there is a generally accepted error of around seven years in Dionysius’ starting point, but despite this his year numbering convention has remained unchanged through to modern times.
The 7 day week
The concept of a seven-day week also comes from ancient Babylon. Prior to 600 BCE the Babylonians celebrated a holy day every seven days, starting from the new moon, and adjusted the length of the final ‘week’ in each month so that each month would commence on the next new moon. The Jewish calendar followed the Babylonians’ but used a continuous cycle of seven-day weeks, celebrating a holy day every seventh day. The early Christians were of course Jews and used the same calendar.
As Christianity slowly spread through the Roman Empire between the 1st and 3rd centuries, the Jewish/Christian week followed (previously the Romans had used an 8-day week). The Julian calendar with a 7-day week became standard across the Empire (including Britain).
The Germanic peoples also adopted the system used by the Romans (although many remained outside of the Empire; common calendars are useful for trade). However, they changed the names of the days to a Germanic naming convention based on their gods. These names came into English usage as a consequence of the Anglo-Saxon invasions during the 5th century and remain through to the present time.
Agreeing the UTC
The International Meridian Conference, held in Washington DC in 1884, created the foundation for a standardized global calendar. The conference agreed to define a ‘universal day’ based on local mean solar time at the Royal Observatory in Greenwich, England. This allowed the development of ‘time zones’ and the creation of the international date line in the middle of the Pacific Ocean. Coordinated Universal Time (UTC) is based on the Greenwich Meridian and the Gregorian calendar.
Calendars: conclusion
Having a defined way of describing a period of time and a precise point in time goes a long way toward allowing the effective management of time within a project. There were of course many other systems developed in different parts of the world and some are still used. However, the ever increasing importance of global communication networks has moved the modern world toward the universal adoption of UTC.
It’s fascinating to think that this fundamental framework was fully defined with an accuracy of seconds per century in 1582 by monks using quills, parchment and abacus.
The invention of ‘bar charts’ and other representations of work and time
Knowing how long an activity should take, and when you want it to occur, is the essence of planning! Scheduling introduces constraints such as the availability of resources to adjust the expected timing of the work. This section looks at the way planning and scheduling information has moved from being data that is calculated, or innately understood, to formats that allow people to literally see what is planned.
Duration Estimating
The mathematics needed to calculate durations has been known for 3,700 years or more. The Plimpton 322 tablet, a Babylonian clay tablet dating back 3,700 years, has been identified as the world's oldest and most accurate trigonometric table. Researchers suggest that the tablet may well have been used by ancient scribes to make calculations for building palaces, temples, and canals (Mansfield and Wildberger, 2017). The Egyptians had similar capabilities but used different calculations.
The Ancient Egyptians were also capable of managing long lead time items. Probably the best-known example is the 43 granite beams used in the roof of, and relieving chambers over, the king’s burial chamber in the Great Pyramid. These blocks weighed between 30 and 60 tons each. Kozak-Holland (2011) estimates a 10-year lead time was needed to cut and deliver the first of these to site. What is less clear is how effective these ancient builders were in coordinating effort across multiple work fronts and predicting the consequences of changes in the plan. This needs the ability to see the interrelationship between activities.
Cartesian representation of data
The combination of numbers and geometry to create a graph was achieved by Nicole d’Oresme (later Bishop of Lisieux) in the middle of the 14th century; he used this new tool to analyze quantitative relationships and extended the doctrine to figures of three dimensions. He considered this analysis applicable to many different qualities.
Cartesian geometry advanced the ideas of Nicole d’Oresme. A Cartesian system uses a pair of numerical coordinates to specify each point uniquely in a plane, the coordinates being the distances from the point to two fixed perpendicular lines. This concept was developed by the French mathematician and philosopher René Descartes (who used the name Cartesius in Latin) in 1637, and independently by Pierre de Fermat. Both authors used a single axis in their treatments, with a variable length measured in reference to this axis. The use of an ‘x’ and ‘y’ axis (Fig. 1) was introduced in 1649 by Frans van Schooten and his students (Weaver, 2014a). This concept is the fundamental underpinning of graphs and charts.
Playfair and Priestley
The originator of graphical schedule control tools appears to be Joseph Priestley (England, 1733-1804). His 1765 ‘Chart of Biography’ is a bar chart (Fig. 2). It plots some 2000 famous lifetimes against a time scale, on the basis that “…a longer or a shorter space of time may be most commodiously and advantageously represented by a longer or a shorter line” (Priestley, 1777a).
Priestley also created his ‘New Chart of History’ (1769; Fig. 3), which used similar concepts, plotting the rule of ‘empires’ against geographical location and time. Priestley’s Chart of History lists events in 106 separate locations. He wrote that: ‘The capital use [of the Charts was as] a most excellent mechanical help to the knowledge of history, impressing the imagination indelibly with a just image of the rise, progress, extent, duration, and contemporary state of all the considerable empires that have ever existed in the world’ (Priestley, 1777b). As Sheps (1999) explains in his article about the Charts, ‘the horizontal line conveys an idea of the duration of fame, influence, power and domination. A vertical reading conveys an impression of the contemporaneity of ideas, events and people.’ The wide distribution of Priestley’s charts was facilitated by ‘relief etching’, an advance in the printing industry that enabled complex plates to be etched, printed, and then hand colored at a fraction of the cost of earlier illuminated manuscripts.
William Playfair (1759-1823) is credited with developing a range of statistical charts, including the line, bar (histogram), and pie charts. He used the same graphical concepts as Priestley in his ‘Commercial and Political Atlas’ of 1786. Playfair’s first Atlas contained 43 time-series plots and one histogram; both the number of charts and their sophistication increased through a series of later revisions of the Atlas (Fig. 4).
The influence of Playfair’s Atlas can be gauged from the fact that the charts included in the reports of the Royal Commissioners on the Great Exhibition of 1851 use exactly the same approaches to displaying data as Playfair had used some 65 years earlier (Fig. 5).
Karol Adamiecki
There is a gap in the records between the late 1700s and the late 1800s that is consistent with issues identified earlier in this paper. There is no reason why Playfair’s charts could not have been adapted to communicate planning information and used to help manage the complex projects that underpinned the industrial revolution, but if they were used in this way, we have been unable to find any record. The use of charts for planning purposes suddenly appears in the record around the start of the 20th century; common sense suggests there was an evolutionary development of ideas leading to the work of Adamiecki and the others discussed below, but the records are still to be uncovered.
Karol Adamiecki (1866-1933) was a Polish economist, engineer and management researcher. He developed a methodology for ‘work harmonization’ based on ‘harmony of choice’, ‘harmony of spirit’, and ‘harmony of doing’, the last of these requiring the sequencing and scheduling of activities to optimise production efficiency. The chart Adamiecki developed in 1896 for use in this method has become known as a Harmonogram (or Harmonygraph/Harmonograf; Fig. 6).
The Harmonygraph has a date scale on the vertical axis (left-hand side) and lists activities across the top. Each activity is represented by a scaled paper strip; the current schedule and duration of each activity are depicted by the position and length of its strip. Actual progress is recorded in the empty right-hand part of each column. In the header of each column, the name and duration of the activity and lists of its preceding and succeeding activities are shown. The strips representing preceding activities are always to the left of the strip of their successor. The tabulation of each activity’s predecessors and successors in the Harmonygraph (‘from’ and ‘to’), and the mechanics of this process, are the same as the calculations in a ‘forward pass’ in modern CPM, making it a distinct predecessor of the CPM and PERT systems developed some 60 years later.
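The parallel with modern CPM can be made concrete. The sketch below (our illustration, not Adamiecki’s notation) tabulates each activity’s duration and predecessors, exactly as the Harmonygraph column headers did, and derives each activity’s earliest finish with a simple forward pass:

```python
# Forward pass over a predecessor table (illustrative network).
# Each entry mirrors a Harmonygraph column header: a duration plus the
# list of preceding activities.
activities = {
    "A": (3, []),         # (duration in days, predecessors)
    "B": (4, ["A"]),
    "C": (2, ["A"]),
    "D": (5, ["B", "C"]),
}

earliest_finish: dict[str, int] = {}

def forward_pass(name: str) -> int:
    """Earliest finish = earliest start (latest predecessor finish) + duration."""
    if name not in earliest_finish:
        duration, predecessors = activities[name]
        start = max((forward_pass(p) for p in predecessors), default=0)
        earliest_finish[name] = start + duration
    return earliest_finish[name]

print(max(forward_pass(a) for a in activities))  # project duration: 12
print(earliest_finish)  # {'A': 3, 'B': 7, 'C': 5, 'D': 12}
```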
This tool was part of a wider philosophy; Karol Adamiecki emphasized the importance of creating harmonious teams, practical scheduling, and compatible, measurable means of production. He claimed that companies implementing his method saw productivity increases of up to 400%.
Unfortunately, his work does not appear to have been widely distributed. The Harmonogram is known to have made a sensation in 1903 when Adamiecki first described it, and the results of its application, before the Society of Russian Engineers in Ekaterinoslaw (Poland was part of the Russian Empire at this time) (Marsh, 1976). But despite its success and practical use, the original paper on the Harmonygraph was not published until 1931 (Adamiecki, 1931), and was (unsurprisingly) written in Polish. From an English/USA scientific perspective this seems to be the reason his work was not well known in the ‘West’. We suspect, though, that personal networks within Europe would have spread his influence throughout the continent.
The Langwies Viaduct and the Schürch bar chart
The Langwies railway viaduct was built in Switzerland between 1912 and 1914 and formed part of the Chur to Arosa narrow gauge railway in the canton of Grisons. The project is of note (and therefore on record) primarily because of the innovative use of reinforced concrete in the construction of the bridge (Schürch, 1915; Peters, 1996).
In his book Building the Nineteenth Century, Peters (1996) suggests that the ability of builders to estimate construction processes with a fair degree of accuracy had grown from ‘none at all’ when the Thames Tunnel was built in 1824-1843 to a stage where, by the end of the century, ‘contractors were generally so sure of their organizational abilities that deadlines became parameters of the building process’ (p. 285).
These early charts were hand-drawn, static representations of the schedule. Bar charts correlate activities and time in a graphical display, thereby allowing the timing of work to be determined, but sequencing is inferred rather than shown. However, these limited capabilities were fully utilized by Herman Schürch in planning and managing this difficult construction project.
The hypothesis we put forward goes beyond this straightforward proposition to suggest that the quality of the information in the ‘Schürch’ bar chart (Fig. 7) and its supporting histograms is far too sophisticated to be either a ‘one-off’ or original; they appear to be part of a well-established engineering practice. The extract below, translated from the original German article, lends weight to the proposition that scheduling was ‘business as usual’ at the start of the 20th century (in Europe at least):
A very accurate graphical building program was set up for the execution of the work … For that, each week was assumed to be five full days of real work, and thus all interruptions, by unfavorable weather, etc., were incorporated. By compiling the demands for each of the individual services in the construction program, a second table … was created, which showed the total demand of construction materials and the overall effort; appropriate stocks were needed with regards to the uncertain and irregular supply, each had to be provisioned timely. The construction program was generally complied with, and the well-developed construction facilities, which have been designed mainly by Dipl.-Ing. J. Müller, just like the construction program, have excellently proven their value and shown to be very efficient, notwithstanding fairly significant investment costs (Schürch, 1915).
This bar chart and the supporting histograms that can be viewed in the original article strongly suggest ‘project controls’ were a well-understood function at the start of the 20th century, even if the concept of ‘project management’ was an idea that would not emerge for another 50 years.
Henry L. Gantt
The importance of Henry Gantt to the development of project management and modern business management cannot be overstated, but he deserves to be famous for the right reasons. Gantt’s work had two primary components:
• A move away from Scientific Management’s strict imposition of control onto the workforce to an approach based on learning and motivation to drive productivity.
• The use of numerous charts to visually display data designed to highlight issues and problems for management.
The vast majority of Gantt’s charts were filled in at the end of each day. The only predictive chart in his books was the ‘Load Chart’, a bar chart focused on planning the production sequence for batches of work through a machine shop (Weaver, 2012).
The concept of a ‘Gantt Chart’ did not arise during Gantt’s life; he designed his charts as needed to provide valuable information to management. After his death, Wallace Clark published a book in 1923 called ‘The Gantt Chart, A Working Tool for Management’ that focused on one of Gantt’s later charts, used to measure the actual production of a batch against the planned rate of production over a few days (Fig. 8). This is the only ‘Gantt Chart’ (Clark, 1923).
While modern project management undoubtedly grew out of Scientific Management (Weaver, 2007), the misnomer of labeling bar charts ‘Gantt Charts’ and claiming they were invented by Henry Gantt seems to be a combination of American isolationism (Gantt’s charts were the first of this type that many American managers ever saw, and therefore ‘all charts’ had to be Gantt Charts) and sloppy scholarship that simply reiterates earlier incorrect assumptions (Weaver, 2013).
The simple fact is that there is absolutely no evidence of any sort that links Gantt to project management (Geraldi and Lechter, 2012); his entire working life and all of his publications were focused on improving the functioning of machine shops and factories. And while he is definitely a worthy successor to William Playfair in making information available to management via the medium of contemporaneously updated charts, this process has nothing to do with the sort of planning Schürch was using his charts for.
Even the term ‘Gantt Chart’ was fading out of common usage until the mid-1980s, when for some reason the engineers developing Microsoft Project decided to call their ‘bar chart’ a ‘Gantt Chart’.
Other representations of activities over time
Two other significant approaches to identifying and managing time appeared in the 1930s, or possibly earlier: milestone charts, which simply highlight the dates significant events are due to occur, and flowline charts.
The most famous use of flowline was for the construction of the Empire State Building in 1931; this 103-story structure was completed in 1 year and 45 days (Fig. 9).
The shift from static to dynamic representation of time models
The major limitation of all of the scheduling techniques discussed to date is the static representation of data. To amend or update the schedule you either redrew the diagram or used an eraser to modify the existing diagram. Any change in one part required manual intervention to flow the consequences through to the balance of the diagram. Adamiecki’s Harmonygraph went some way toward facilitating this: by pinning paper strips to the chart (which is why his bars are vertical) and documenting their predecessors, adjustments were made easier, but the process was still manual.
Developing dynamic schedules that automatically recalculated the consequences of a change, as well as calculating the overall schedule itself, needed the introduction of computing in the 1950s.
OR, the underpinning of dynamic scheduling
Operations Research (OR) appears to be the seedbed that gave rise to the almost simultaneous development of dynamic scheduling methodologies in the UK, USA, France and Germany, which can be broadly classified as the ‘critical path’ approach to dynamic network scheduling.
OR is a branch of applied science that informs management decision making. It is an interdisciplinary science which uses methods such as mathematical modeling, optimization, and statistics to support decision making in complex real-world situations concerned with the coordination and execution of an operation within an organization. Most (though not all) OR involves carrying out large numbers of calculations. Consequently, it would seem likely the growth of OR was facilitated by the increasing power and widespread availability of computers from the 1950s onward.
OR started in the UK in the late 1930s; in July 1938, the British Air Ministry conducted a major war-readiness air-defense exercise using its new radar stations. This exercise highlighted serious problems around the need to resolve multiple, and often conflicting, streams of information received from various sources, so that the decision makers had access to the ‘best available’ information in real time. A new approach to information processing and decision support was urgently needed.
To resolve this critical issue, the Superintendent of Bawdsey Research Station proposed a crash program of research into the operational, as opposed to the technical, aspects of the air-defense system. The term ‘operational research’ was coined as a suitable description of this new branch of applied science. The first team was selected from among the scientists of the radar research group the same day (Operational Research Society, 2018).
What the scientists brought to their work were ‘trained minds’, used to applying the scientific method to develop and test hypotheses based on experimentation and data. The practice of OR was well established in the armed services, both in the UK and in the USA, by the end of the war.
From these roots, OR appears to have been the catalyst that triggered the relatively coordinated developments of various ‘critical path method’ (CPM) systems in the USA, UK and Europe. The documented links between OR and several of these developments strongly suggest that OR concepts and processes such as linear programming spawned the concepts of CPM. In addition, the regular cross-pollination of ideas between the different OR bodies through conferences and various publications would have been an ideal medium to facilitate the exchange of ideas between the various CPM pioneers prior to the emergence of ‘project management’ organizations more than a decade later.
Critically, OR was an area of interest to Jim Kelley. He was scheduled to give a paper to the Case Institute operations research conference in January 1957 when he was seconded to the Du Pont team being assembled by Morgan Walker, which led to the development of CPM. Kelley’s paper to the OR conference went ahead with the inclusion of a ‘simple linear program formulation’ of the construction scheduling problem, possibly the first formal recognition of CPM as a process (Kelley, 1957).
Project management organizations arrived much later and arguably grew out of the spread of CPM scheduling. Two of the earliest project management organizations were INTERNET and PMI (Weaver, 2007). INTERNET was founded in Europe in 1964, adopting the name (an INTERnational NETwork) in 1967. It subsequently changed its name to IPMA (International Project Management Association) when the ‘other’ internet started gaining popularity. PMI followed in the USA in 1969.
ADM or PDM: why the difference?
The concept of developing the schedule as a dynamic network, and the basic calculations, are consistent across most of the early systems, but two very different styles of presentation emerged: the ‘activity-on-node’ notation used by most variants, and the ‘activity-on-arrow’ notation used by two parallel developments in the USA.
The origins of the activity-on-node notation
The Precedence Diagramming Method (PDM), also called ‘Activity-on-Node’, creates a network based on nodes or events, usually representing the work of an activity, connected by lines or links. This basic approach is consistent across the several early developments where information on the networking approach remains; these include:
• The Precedence Diagramming Method (PDM) published in 1961 by Dr. John Fondahl in his seminal report ‘A Non-computer Approach to Critical Path Scheduling’ in the USA (Fondahl, 1961).
• The Metra Potential Method (MPM) developed in 1958 by Mr B. Roy in France and the UK.
• RPS (Regeltechnische Planung und Steuerung) developed in 1960 by Walter and Rainer Schleip in Germany.
The Precedence Diagramming Method, or at least the development led by Fondahl, overtly owes its format to process flow diagrams (Fondahl, 1987); he described this type of diagram as ‘circles and connecting lines’. The name was changed to ‘precedence diagramming’ after IBM computerised this approach to scheduling (Weaver, 2006).
The ‘process chart’ (ie, process flow diagram), invented by Frank and Lillian Gilbreth, was the first structured method for documenting process flow (Gilbreth and Gilbreth, 1921). ‘Process Charts: First Steps in Finding the One Best Way to do Work’ was presented to members of the American Society of Mechanical Engineers (ASME) in 1921. This presentation was widely acclaimed and their concept quickly found its way into industrial engineering curricula. By the early 1930s these ideas were also being taught to, and used by, business management. It seems highly unlikely this development stayed exclusively in the USA, and the fundamental concepts of a ‘process flow’ and PDM schedule logic seem very closely aligned.
The origins of the activity-on-arrow notation
The Arrow Diagramming Method (ADM), also called ‘Activity-on-Arrow’, is far less intuitive. ADM creates a network where the work is defined by arrows that connect at nodes, but the nodes generally have no function other than connecting the end of one or more arrows to the start of the next. The two developments that used this notation were the original CPM developed by Kelley and Walker for Du Pont (Kelley and Walker, 1989) and the PERT system developed for the US Navy (Malcolm et al., 1959).
Given that James Kelley was a mathematician working for the US Navy prior to moving across to private business in the early 1950s, and that PERT was a Navy development, the oral history collected by Chris Fostel and reproduced in Appendix A suggests the Quartermaster Corps is a likely origin for this notation. Fostel reports that ‘arrows and nodes’ were used by the Corps for planning movements in the Pacific campaigns from 1942 onward, and that this methodology was declassified in 1956.
The narrative in Appendix A provides a reasonable explanation for both the notation and some of the key terms in CPM such as ‘float’ and ‘slack’. However, the representations used by the Quartermaster Corps were static and their mathematics simple time-based calculations. The innovations in CPM and PERT that occurred in 1956-57 used the same diagrammatic base but applied advanced mathematics to the scheduling problems. Unfortunately, both sets of mathematical innovation largely faded from general use during the period from the mid-1960s to 2000. The potential revival of some of these advanced concepts in modern form is discussed in Section 4.
The CPM variant of ADM
The challenge the Du Pont team led by Kelley and Walker had to solve was the time-cost conundrum. They could demonstrate that, in preference to flooding a project with labor to recover lost time, focusing effort on the ‘right tasks’ could reduce the time needed to complete a plant shutdown without significantly increasing cost (Fig. 10). The problem was identifying the ‘right tasks’ to compress (Weaver, 2006). This is a time-cost optimization problem that needs far more complex calculations than the simple time analysis found in most software today.
The concept of mathematical optimization has its roots in the 17th century. The branch of optimization used in this original form of CPM was Linear Programming (LP, also called linear optimization). LP is a method to achieve the best outcome in a mathematical model whose requirements are represented by a series of linear relationships (Dantzig, 1949), and it is specifically mentioned by Kelley in a number of contexts around the development of CPM.
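To illustrate the style of formulation involved (this is our simplified sketch with invented numbers, not the original Du Pont model), the time-cost trade-off can be posed as an LP: minimize the total cost of ‘crashing’ activities, subject to the precedence logic and a deadline. Using scipy:

```python
# Time-cost trade-off as a linear program (illustrative numbers only).
# Network: A then B in series; C in parallel. Normal durations A=5, B=4,
# C=7 days; crash costs $100/$50/$80 per day saved; crash limits 2/1/3
# days; deadline 7 days.
from scipy.optimize import linprog

# Decision variables: [t1, t2, xA, xB, xC]
# t1 = event time after A, t2 = project finish, x* = days crashed.
c = [0, 0, 100, 50, 80]           # minimize total crashing cost

A_ub = [
    [-1,  0, -1,  0,  0],         # t1 >= 5 - xA        (A finishes)
    [ 1, -1,  0, -1,  0],         # t2 >= t1 + 4 - xB   (B follows A)
    [ 0, -1,  0,  0, -1],         # t2 >= 7 - xC        (C in parallel)
    [ 0,  1,  0,  0,  0],         # t2 <= 7             (the deadline)
]
b_ub = [-5, -4, -7, 7]
bounds = [(0, None), (0, None), (0, 2), (0, 1), (0, 3)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x[2:])   # days to crash on A, B, C -> [1. 1. 0.]
print(res.fun)     # minimum extra cost -> 150.0
```

Kelley’s original formulation was parametric, tracing the whole time-cost curve rather than solving for a single deadline as this sketch does.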
The data tables and computations involved in CPM may have made sense to the Du Pont team but confused management. To explain the process, the first ADM diagram was created freehand by Kelley as an aid to explaining the overall scheduling concept to management after the computer had crunched the numbers and produced its results (Fig. 11). The untested assumption we make in this paper is that the notation used by Kelley was informed by the diagrams developed by the Quartermaster Corps from 1942 onward (see Appendix A); he worked in the same general area of the Navy.
Unfortunately, the calculations used in this form of optimization were taking far too long to process on the available computers and obtaining the range data needed for the calculations was difficult. Consequently, when CPM was commercialised the mathematics were ‘dumbed-down’ to the simple CPM calculations we see today and all that remains of the original calculations is the concept of the ‘i-j’ nodes which came from the matrix needed to set up the optimization.
Kelley believed the difference between ADM and PDM network diagrams was a function of the algebra used to define the problem. In the algebra of the parametric linear program used in CPM: ‘…a job was denoted by a number pair (i,j) …the common subscripting used for indexing two way tables and matrices’ (Kelley and Walker, 1989).
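Kelley’s point is easy to see if the same small network is written in both notations (our sketch, not taken from either development): in PDM the activities are nodes with dependency lists, while in ADM each activity is an (i,j) arrow between numbered events, and zero-duration ‘dummy’ arrows are sometimes needed to keep the event logic unambiguous.

```python
# The same four-activity network in the two notations (our sketch).

# PDM / activity-on-node: activities are nodes; links carry no work.
aon = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

# ADM / activity-on-arrow: each activity is an (i, j) arrow between
# numbered events -- the subscript pairs Kelley describes. B and C both
# run from the end of A to the start of D, so a zero-duration 'dummy'
# arrow is needed to keep every (i, j) pair unique.
aoa = {
    "A": (1, 2),
    "B": (2, 3),
    "C": (2, 4),
    "dummy": (3, 4),   # zero duration; logic only
    "D": (4, 5),
}
```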
The PERT variant of ADM
The PERT initiative also addressed a complex problem: answering the question ‘what is the probability of achieving a target date?’ The mathematical approach used was simplified to fit within the capabilities of the available computers, but it provided an adequate answer given the uncertainty of the data being processed. No one had built anything like the POLARIS submarines and missiles before, so every estimate was shrouded in uncertainty (Malcolm et al., 1959).
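The simplification adopted was the now-familiar three-point estimate (stated here in its standard textbook form): each activity is given optimistic (a), most likely (m) and pessimistic (b) durations, from which

$$ t_e = \frac{a + 4m + b}{6}, \qquad \sigma = \frac{b - a}{6} $$

The expected length of a path is then the sum of its activities’ $t_e$ values, its variance the sum of their $\sigma^2$, and the probability of meeting a target date is read from the normal distribution.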
The success of PERT outside of the Navy is a fact of history. However, its use in the POLARIS program was limited and its results were treated with suspicion. The primary use of PERT seems to have been to convince the US Congress their money was being spent in a controlled way; in that respect PERT was 100% successful (Weaver, 2006). The controls conundrum is that POLARIS was a very successful program of work, yet its ‘star control tool’ was hardly used and was not trusted by management.
Outside of the Navy the concept of PERT became widely accepted, and the shortcomings in the PERT approach to probability were overcome by the development of more powerful computers that could process Monte Carlo calculations in a reasonable timeframe, but this development has never become mainstream.
The regression to simple arithmetic
The tragedy of modern project management is that the sophisticated modeling applied in all of these interesting developments (including PDM) quickly faded from use. By the mid-1960s there was a consistent approach to CPM that used a single deterministic duration estimate for activities in both PDM and the two types of ADM networks (CPM and PERT). In these ‘new models’, optimization had disappeared completely, resource planning was simplistic at best, and cost projections were a simple aggregation.
Traditional PERT (despite its shortcomings) and Monte Carlo are still used occasionally to assess probability, but they are not mainstream. Monte Carlo is a computer-intensive analysis used to determine the impact of identified risks (variable inputs) by running simulations to identify the range of possible outcomes for a number of scenarios. Random sampling is performed prior to each run, based on the variable inputs, to generate the range of possible outcomes with a confidence measure for each. This concept was devised during the Manhattan Project (1944) but needed powerful computers for the technique to be of general use.
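A minimal sketch of the technique in Python (the two-path network and triangular ranges are invented for illustration): sample every uncertain duration, take the longest path, repeat many times, then read off the probability of meeting a target.

```python
# Monte Carlo schedule risk analysis, minimal sketch (invented numbers).
import random

def simulate_once() -> float:
    """One run: sample each duration, return the project finish time."""
    a = random.triangular(4, 8, 5)    # activity A: (optimistic, pessimistic, most likely)
    b = random.triangular(3, 9, 6)    # activity B follows A
    c = random.triangular(8, 15, 10)  # activity C runs in parallel with A+B
    return max(a + b, c)              # the longer path drives the finish

RUNS, TARGET = 20_000, 13.0
results = [simulate_once() for _ in range(RUNS)]
p_on_time = sum(r <= TARGET for r in results) / RUNS
print(f"P(finish <= {TARGET} days) ~= {p_on_time:.0%}")
```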
The regression to bar charts
With the introduction of computers with a graphical user interface in the 1980s this regression continued. While the best of the available scheduling tools retained much of the capability found in the mainframe systems of the 1960s, the ease-of-use of quickly edited graphics saw the majority of projects being scheduled using deterministic bar charts drawn ‘on screen’ rather than derived from logic-based calculation.
To be fair, the number of projects expanded exponentially during the 1980s and beyond. The concept of the ‘accidental project manager’ became omnipresent outside of traditional project industries, and there was a consequential diminishment of skills and knowledge, particularly in the specialized area of project controls.
The failure of ‘project management’ to deliver on its potential may well be attributed to this loss of skills and the diminished capabilities of the overall project controls function (CIOB, 2008). The correlation between the loss of project controls (scheduling) capability and the apparent increase in the rate of project failure is a topic for future study. Resolving this skills shortage, at both the technical and managerial levels, is a challenge the profession of project management still has to adequately address.
The future
A number of trends are emerging that may reverse the decline in capability briefly discussed above; these include:
• The concept of BIM (Building Information Modeling), when fully realized, has the capability of shifting the concept of planning from an abstract function to something resembling a ‘virtual Lego® set’, where the project team assembles the elements of the project in a virtual environment and the model applies artificial intelligence to constrain the timing of the work to manage issues related to resource requirements/availability, safety, etc. (Weaver, 2017).
• The ability of scheduling tools to apply optimization to the resource/duration conflict to better balance cost, resource utilization and duration outcomes (similar to the concepts originally used by Kelley and Walker in 1957). Resource optimization is readily available in a number of commercial tools and leads directly to cost optimization but almost no one seems to use the capability.
• The ability of scheduling tools to incorporate active knowledge management to learn from previous project schedules and recommend options to planners as they develop a new schedule. This concept is very new. One product that has this capability is Basis (https://www.basisplanning.com).
The potential for more projects, more often, to get the right people in the right place at the right time to deliver a successful outcome is improving. However, having the technology is only part of the answer. The far greater challenge is convincing management to invest in effective project scheduling and controls, and to develop the skills needed to take the centuries-old practices outlined in this paper forward into the 21st century.
© The Author(s) 2018. Published by Higher Education Press. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0).