Introduction
No one can deny that there is a rapid increase in the reliance upon computers to perform a variety of tasks. Ubiquitous computing has become a major technology paradigm that extends beyond the early work at places like Xerox PARC, IBM Research and HP Laboratories (Krumm, 2016). New efforts focus on developing machines that replicate complex decision-making traditionally performed by humans. This includes so-called artificial intelligence (Warwick, 2013), such as self-driving automobiles (Ohn-Bar and Trivedi, 2016), or decision-making in healthcare applications (Tsoukalas et al., 2015). Human-technology interaction has moved beyond occasional human intervention to systems with ongoing collaboration between the human and the machine, thereby increasing 'co-dependence with' or 'training of' algorithmic systems and resulting in people being effectively subsumed into the algorithmic landscape (Applin and Fischer, 2015).
In this context, the term Machine Intelligence (MI) is used to describe the performance of computers when they are able to select actions from available options in order to arrive at specific desired outcomes (Cotter, 2015). As we make the transition from decisions resulting from purely Human Intelligence (HI) toward MI, we find ourselves in an intermediate state where most decisions are made with a joint HI-MI contribution (Jacko, 2012; Helander, 2014). Initial applications of HI-MI include mobile crowd sensing and computing (MCSC), where a collective knowledge discovery paradigm allows data fusion and machine intelligence supported by associated collaboration modes (Guo et al., 2015). However, Cotter (2015) notes that there is a relative void in this emerging research area on how to manage these processes, and an increasing need for a body of knowledge geared toward HI-MI decision-making and governance.
To illustrate this emerging need, consider the case of the self-driving automobile (Blyth et al., 2016). Circumstances will arise in which the letter of the law cannot be adhered to (such as an obstacle in the driving lane requiring the vehicle to cross a solid line) and judgment beyond the current state of MI is required. However, it is not clear where, how, or by whom decisions addressing these types of scenarios will be made. Consequently, this article is presented to demonstrate the urgency of developing robust guidelines to support HI-MI decision-making. We propose that the collective sum of human knowledge necessary to assess the decisions made by machines is at risk of decreasing with increased use of MI. Human reliance upon machines may cause humans increasingly to view the machine results as the correct answer, leaving them unable to offer solutions superior to the machine's offerings. We contend that this represents a significant risk to the operation of engineering systems when it is not mitigated by effective decision-making structures and supporting processes.
Symptoms of the problem
At the close of the 20th century, some people speculated that the majority of existing computer programs risked failing due to misinterpretation of the two-digit year '00' as either 1900 or 2000. This was known as the Y2K bug (Snow and Keil, 2002). Companies expended large sums to prepare against this Y2K risk, with USD $30 million being common in larger organizations (Kennedy, 2010). A participant at one technical society meeting in 1999 asked a seemingly straightforward question: "Why cannot these companies simply look at the programming code and logically determine how the year 00 will be interpreted?" The answer, it seems, is that although humans created the code viewed as a risk, increasing complexity had reached a point beyond the ability of these humans to understand how the programs would function in unfamiliar circumstances. The most common response to the Y2K risk was replacing the legacy programs with newer hardware and software capable of registering a four-digit year.
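To make the ambiguity concrete, the short Python sketch below (purely illustrative; the pivot rule and function names are our own and do not come from any particular legacy system) shows how the same two stored digits can be expanded to either 1900 or 2000 depending on an assumption buried in the code:

```python
def expand_two_digit_year(yy: int, pivot: int = 50) -> int:
    """Expand a two-digit year using a pivot rule: years at or above the
    pivot are treated as 19xx, years below it as 20xx."""
    return 1900 + yy if yy >= pivot else 2000 + yy


def legacy_expand(yy: int) -> int:
    """A typical pre-Y2K assumption: every two-digit year is 19xx."""
    return 1900 + yy


print(expand_two_digit_year(0))  # 2000
print(legacy_expand(0))          # 1900 -- the Y2K ambiguity in miniature
```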
A retired engineering management faculty member was at a bank to co-sign a mortgage for his daughter. The quoted monthly payment was much higher than the quote previously given to the daughter. The loan officer, who regularly entered similar loan information into the bank's financial system, was not convinced any mistake had been made. Using a spreadsheet and classic formulas (Newnan et al., 2013), the professor calculated a payment lower than either quote provided by the bank. Further investigation determined that the bank's computer program was automatically including life insurance premiums for the borrower. Since the professor was in his 70s, the estimated premiums added almost USD $600/month to the quote. The loan officer was university educated in business and had manually calculated loan payments in the past. Despite dealing with mortgages continually at work, this human user accepted the output of the machine as the correct number, and it was the manual mechanism for arriving at the calculated values that was questioned.
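For readers who wish to reproduce the professor's check, the standard amortized-payment formula found in engineering economics texts (e.g. Newnan et al., 2013) takes only a few lines; the principal, rate, term, and premium below are illustrative assumptions rather than the figures from the actual loan:

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard amortization formula: A = P * i / (1 - (1 + i)**-n),
    with i the monthly interest rate and n the number of payments."""
    i = annual_rate / 12
    n = years * 12
    return principal * i / (1 - (1 + i) ** -n)


# Illustrative numbers only -- not the loan described in the anecdote.
base = monthly_payment(principal=300_000, annual_rate=0.05, years=25)
quoted = base + 600  # e.g. an unnoticed life insurance premium added by the system

print(f"Hand-calculated payment: {base:,.2f}")
print(f"System quote with hidden premium: {quoted:,.2f}")
```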
People old enough to remember grocery shopping before bar codes will recall that the cashier job was a higher-skill job than at present. The cashier was expected to know the prices of all produce sold by weight and could quickly count cash and calculate appropriate change. A dishonest cashier could 'short change' a customer by mentally calculating a lesser amount to pay back from large denomination bills and pocketing the difference (Murphy, 2016). As foreseen by Sullivan (1990), the introduction of smart registers has eliminated the cashier as a skilled occupation and replaced it with a job requiring scanning items and handing the customer a device to enter payment. In a study by Hardesty et al. (2014), scanning errors occurred at a rate of about 1 in 30 items (which exceeds the US Federal Trade Commission's standard). These are generally not caught by the cashier, and customers tend to report only errors not in their favor. On a personal note, there have been times when we purchased items that were underpriced by 90% or more. On those occasions when we alerted the cashier to the likely error, the items were re-scanned and, when the same greatly underpriced value was registered, the cashier shrugged and continued with the transaction. The price displayed by the machine was accepted. As well, during cash sales when the change is close to a round dollar value, under the manual system a cashier would ask the customer if they had the small amount to add to avoid handing back large amounts of coins. Under the automated systems, cashiers are typically confused when customers hand them a few coins in an attempt to make up this difference, since they no longer routinely do this type of mental math at their job (Wehmeyer, 2015).
A local independent auto body shop considered joining a national chain. A condition of joining was that fees for all work be calculated using an application required by insurance companies for their work. The shop owner realized that proceeding would eliminate the ability to set prices based on customer-driven factors, such as labor-only jobs (with parts supplied), or providing discounts for attractive circumstances. Changing systems risks losing an established customer base since this shop would become largely indistinguishable from the next franchised shop. Not discounting for strategic purposes is counter to established best practice recommendations (e.g. Flynn, 2010). In this case, the MI system greatly impaired the individual operators' ability to create strategic advantages for their businesses.
In the examples above, the main point we wish to highlight is that implementing an MI system resulted in decisions that management might not have endorsed had they been able to see the consequences. The bank risked losing a customer since the quote provided was much larger than the actual payment would have been. For the Y2K systems, the users were not able to understand how the system would react to new situations. For the auto body repair shop, the established prices could not be modified to suit certain situations. For the retail outlet, the stores lose money when errors allow transactions to take place far below the intended price. In the case of the bank, the user had once known how to calculate the answer, but the automated answer was not challenged or even suspected of being in error. For the retail outlet, the workers prior to the implementation of MI had highly developed skills that were no longer sought in the post-implementation workers. In all the examples, there was no explicit decision by management to accept these outcomes. The implementation of the automated system is not likely to be reversed and the consequences are set. As well, once implemented, the MI decisions are generally not challenged by the humans who previously made them and, over time, the humans lose the skills required to understand the workings of the machine.
The game may be changing
Skepticism about the ability of machines to match or exceed human decision-making has deep roots in the history of the existing engineering management body of knowledge. Some older textbooks (e.g. see Upton, 1998 or Chase et al., 2004) note how attempts to automate more complex processes typically produced unsatisfactory or marginal results. The USD $1 billion expenditure by K-Mart on automation is offered as a major cause of its significant loss of market share and profitability (Coleman, 2000). The introduction of an ERP (enterprise resource planning) system at Hershey and the subsequent loss of customer confidence is used as evidence of the dangerous allure of automation (Barker and Frolick, 2003).
An extensive study of the financial benefit of automating human processes performed a decade ago (Vemuri and Palvia, 2007) found no clear evidence to justify the considerable sums companies continue to invest. There was no doubt, however, that the investment continues whether or not it is supported from a fiscal viewpoint. Vemuri and Palvia (2007) note that once the initial large investment is made, subsequent improvements and expansions do provide clear positive paybacks within the new automated environment. That is, once the change is made, regardless of the merit of that choice, it does make sense to continue down the same automated path.
As the role of machine intelligence increases, it behooves engineering management specialists to better understand the consequences of present decisions and to stay current with the state of MI integration; not pursuing such a course amounts to blindly accepting potentially significant risks to the operation and performance of the many HI-MI engineering systems in operation today. In this regard, a tour of a new processing facility by one of the authors sheds light on the potential for rethinking standard management practices. The company toured provides medical-related products to the healthcare industry and has USD $150 million in annual sales. It has five manufacturing plants in different urban centers and is building one more. The particular plant toured was completed in 2015. Labor represents 60% of its expenses. However, machines determine the pace of production, with humans required at certain steps for processes not yet automated. The humans either keep up to the pace set by the machine or are replaced by people who can. In such an environment, the company's executives expressed no interest in the staples of traditional operations management. The principles of the learning curve (Adler and Clark, 1991) and continuous improvement (Fryer and Ogden, 2014) to drive lower production costs were considered irrelevant. The company noted that there were start-up inefficiencies, but after the bugs are worked out they notice no material improvements in production over time. Efficiency is driven by improvements in technology following each new plant start-up. The new plants level out at higher production rates per person than the older plants. Efforts to improve individual human productivity (Samnani and Singh, 2014) or programs to promote highly effective teams (Douglas et al., 2015) would have limited impact on the machine-driven production rates. Finally, employee turnover was typically 30% a year. Although the company would prefer lower rates due to the cost of hiring and retraining, it was unconcerned with either a loss of organizational knowledge (Hausknecht and Holwerda, 2013) or the retention and replacement of key employees (Durst and Wilhelm, 2012). As more businesses move to MI-driven processes, engineering management specialists may find themselves unable to demonstrate how they can add value. As the experience of this company suggests, the processes are established at start-up and it is difficult to claim that more optimal options could be possible, and even more difficult to implement them.
Cases in the literature
Trägårdh, Carlsson and Edenbrandt (2015) investigated situations in the medical profession where machines work alongside humans. Humans can provide considerably different medical diagnoses based on many factors, including situational circumstances, experience and skill. When medical decisions are based on computer diagnoses, the variability generally stems from which software is used. Where the human diagnoses are aided by computers, variability between physicians is removed when the same software is used. This implies that the HI is brought in line with the output of the MI. In a medical setting, decisions are generally based upon large numbers of occurrences and outcomes are clearly recorded for particular inputs.
Hedén et al. (1997) tested the diagnosis of heart problems from a collection of over 10,000 EKGs (electrocardiograms). They compared the ability of a heart specialist to find patients with problems against a computer programmed to recognize the symptoms from the EKG data. They determined that a highly experienced human practitioner was almost as good, but the machine did perform better.
Gawande (2010) notes that where computers and humans are jointly used to make diagnoses, there is general agreement between the HI and MI in most cases. When there is a difference, the computer is more likely to be correct. Gawande (2010) suggests that including the human in the process only slows down the system and does not improve accuracy. Even so, Gawande (2010) provides anecdotal situations where the senior physician did overrule the machine diagnosis and the subsequent treatment did support the medical doctor's assessment, thereby showing that the MI was not perfect.
Taylor and Cotter (2016) surveyed 77 aircraft pilots and determined reliance on machine intelligence creates complacency in the pilots. This poses a risk of pilots being unsure of what to expect from the machine controlling an aircraft. There is also a risk when problems occur beyond the computer’s ability and the pilots are unable to quickly determine the proper corrective action. One incident occurred when a particular transmitter did not function properly and the computer could not ‘see’ the deficiency. When manual overrides were required, the pilots forgot that the required actions were not the same as when they trained on a manually controlled aircraft.
Human ability to assess complex situations decreases with lack of use. In a study of taxi drivers, Maguire et al. (2000) found that the hippocampus (the part of the brain associated with spatial orientation) increased in size with decades of experience navigating the streets of London. Regular GPS (global positioning system) users have a lower ability to navigate on their own without MI support. Brain imaging techniques show that the hippocampus is very active during spatial interpretation, but is inactive while driving using a GPS device. The equivalent portion of the brain of mice atrophies when they are prohibited from making decisions in navigation (Robbins, 2013).
Brown and Laurier (2012) assert that although the use of GPS deskills a driver in spatial orientation and increases reliance on technology, this is offset by a requirement to learn a new set of skills, for example knowing when to intentionally deviate from the instructions provided. In the absence of experience performing the specific task unaided by technology, however, Brown and Laurier (2012) also show that the human will simply follow the suboptimal solution provided by machine intelligence.
Non-destructive testing (NDT) is an established technique to detect defects in welds and is an essential part of construction projects (Georgiou, 2009). Significant work is underway to automate NDT since it is labor intensive and requires skilled operators. Machine-driven approaches are likely to be implemented in the future, but the current state of the art is still contingent on human input, for example in the interpretation of weak signals from ultrasonic inspection of possible weld defects. For a related example, see the work on the issues associated with NDT in steel bridge maintenance and restoration (McCrea et al., 2002). In this scenario, there needs to be improved HI-MI decision-making so the human operator can work in collaboration with the machine-driven fault diagnosis system.
Engineering was not an exacting discipline when performed solely by humans
Henry Petroski (1985) explored the development of many common items in use today, as well as the historical perspective of how engineering design has evolved. His books are required reading in many engineering curricula (Nichols et al., 2000). Petroski (1985) explains that engineers have a very imprecise understanding of the real behavior of the elements they design. In civil engineering, Petroski (1985) states that we have empirical tools to help design structures. Where these tools lead to large factors of safety, there may be no change to how we calculate. Where the assumptions over-estimate the robustness of the design, a subsequent catastrophic failure causes a review of the tools used to produce that design. Petroski (1985) foresaw computers replacing slide rules as the calculating tool of choice for engineers. He predicted that engineers may lose some ability to grasp the overall magnitude of a result by relying heavily on a machine that provides an answer to six decimal places, and may become overconfident in results that are repeatable to exact numerals when recalculated many times to 'recheck' the work.
Kennedy and Whittaker (2000) interviewed engineers from a variety of industries about the formal company documents intended to direct their work. The detailed content of these manuals was a very unreliable guide if taken at face value. Some organizations directed their younger engineers not to consult these documents for fear they would not know which sections contained misleading information. These manuals carried a false credibility derived from being bound in a hard cover. In some cases, the writer of such a manual had only been given the task as a way to occupy a less productive individual during a slower period. Design standards commonly had major errors in the information provided. For example, one design standard had two charts for required test pressures for a facility: one chart was in imperial units and the other in SI units. Applying proper conversion factors, the SI chart showed test pressures 100 times higher than the chart in imperial units and would have resulted in a catastrophic failure had that chart ever been followed. The standard was in circulation for approximately ten years, but none of the engineers questioned could offer an explanation as to why the error had never been noticed.
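A simple automated consistency check would likely have caught an error of this kind. The sketch below uses hypothetical chart values (the real standard's pressures are not reproduced in the article) and flags any SI entry that disagrees with the converted imperial entry:

```python
PSI_TO_KPA = 6.894757  # conversion factor, accurate enough for a sanity check


def check_charts(imperial_psi: dict, si_kpa: dict, tolerance: float = 0.05) -> None:
    """Flag entries where the SI chart disagrees with the converted imperial chart."""
    for size, psi in imperial_psi.items():
        expected = psi * PSI_TO_KPA
        listed = si_kpa[size]
        if abs(listed - expected) / expected > tolerance:
            print(f"{size}: SI chart lists {listed:.0f} kPa but conversion "
                  f"gives {expected:.0f} kPa -- review the standard")


# Hypothetical entries; the first SI value contains a factor-of-100 error
imperial = {"NPS 4": 150, "NPS 8": 275}
si = {"NPS 4": 150 * PSI_TO_KPA * 100, "NPS 8": 275 * PSI_TO_KPA}
check_charts(imperial, si)
```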
As consultants, the authors have also witnessed recent challenges in design and fabrication reviews. With laser technology, the ability to measure to great precision has become easier and cheaper than in the past. In one instance, specifications required certain 30 m long steel columns to be fabricated within 2 cm of straight over their length. The computer-based finite element analysis determined that offsets greater than 2 cm under the vertical load produced excessive bending stresses at the base. As the columns were made from flat steel plate, the warping inherent in welding required repeated cutting and re-welding to keep the structure true. The lead design engineer suspected that the bending moment induced would slightly deform the supporting foundation and that this would alleviate the problem. However, without being able to produce more than speculation, the remediation work continued and a final product that met the criteria was USD $125,000 over budget due to the extra effort required. This company had manufactured similar structures for many decades. When asked how they had dealt with the problem in the past, the answer was that the company previously had neither the sophisticated modeling ability nor the laser tools to measure straightness so precisely. None of the older structures had problems with moments at the base, suggesting the remediation work was not required.
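To see what the model was reacting to, an elementary eccentric-load check is sketched below. The axial load and section properties are hypothetical (the article does not report them); the point is simply that the extra base stress grows linearly with the out-of-straightness, so the effect of holding a 2 cm tolerance can be weighed against the cost of achieving it:

```python
def base_stress_mpa(axial_load_kn: float, offset_m: float,
                    area_m2: float, section_modulus_m3: float) -> float:
    """Combined axial plus bending stress at the column base for a vertical
    load applied with an out-of-straightness offset (M = P * e)."""
    p = axial_load_kn * 1e3                       # N
    sigma_axial = p / area_m2                     # Pa
    sigma_bending = p * offset_m / section_modulus_m3
    return (sigma_axial + sigma_bending) / 1e6    # MPa


# Hypothetical column: 500 kN axial load, A = 0.02 m^2, S = 0.004 m^3
for offset_cm in (0, 2, 5):
    stress = base_stress_mpa(500, offset_cm / 100, 0.02, 0.004)
    print(f"offset {offset_cm} cm -> base stress {stress:.1f} MPa")
```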
If an MI modeling system is developed based upon traditional engineering principles, one can see how potential problems could result. The assumptions made by humans to create designs may have served well within certain constrained parameters, but the underlying science may be drastically different from these assumptions. The people building an MI system may not be technically expert and may not realize the design principles are only empirically valid in a specific range. Lastly, there may be errors in the information used to develop the model that have existed for years, and only those closest to the design work may realize this information is flawed and should not be used. Tacit knowledge is identified as that set of skills that cannot be easily expressed and would be next to impossible to convey into code for a machine to imitate.
Managing the human versus machine intelligence decision process – A case study
Methodology
This case study is based on a narrative research approach (Lieblich et al., 1998), where events associated with a specific situation related to HI-MI decision-making are reported. The observations were collected while working within the organizations in project management roles and are not the result of focused research specific to HI-MI topics. The study of HI-MI decision processes is an emerging area of research and there is currently a lack of studies focused on the problems that may arise along with supporting frameworks to enable decision-making processes. Consequently, this qualitative research technique was selected to provide sufficient engineering context as well as a real-world application in order to advance the HI-MI research agenda.
The case study includes reference to a stress analysis model, which was built with a widely used deterministic, finite element based software package. The engineering design drawings were prepared using standard CAD (computer-aided design) software, and the stress analysis software extracts the required information from the CAD data. These methods are decades old, as described by Dori (1989), where an algorithm recognizes dimensions from drawings based on a syntactic/geometric approach and deterministic finite automata (DFA). Humans are still required to provide additional parameters before the model can determine peak stresses.
Background and engineering context
The engineering case studied for this article used stress analysis software to aid in the design of a piping system for facilities very similar to those in use by the company for the past 50 years. The fluids involved had a design temperature range of -20 to 30°C, a relatively narrow range in the piping design industry. The piping required approximately 100 connections of NPS (nominal pipe size) 16 pipe, as shown in Fig. 1. A design specification required that the load at the connecting flange at location A in Fig. 1 be kept below 20 kN, because the equipment downstream, as indicated by the arrow, is sensitive to loads at the flange.
For the first 50 years of the operating company's history, engineering design work was performed in-house. For large projects, shortfalls in staffing would be augmented by bringing in contingent workers hired on temporary contracts for the peak workload. If large engineering firms were utilized, their workers would be seconded into the owner organization to become part of the overall structure. At the time of this case, a shift in the management of projects was observed within industry in general (e.g. see Burdon and Bhalla, 2005), where operating companies increased their dependence upon outsourcing the engineering design function as a discrete transfer to large engineering organizations, such as Fluor, Bechtel, or Jacobs (Bamber et al., 2016). The operating company in this case similarly outsourced the design of a major expansion project to a large engineering firm. The quality of design was assured by the service provider being ISO 9001 (International Organization for Standardization) certified for its established design and review processes.
When the design was 50% complete, the engineering provider notified their customer that the traditionally used piping arrangement creates too great a stress at the critical position A. The stress analysts reached this conclusion using the engineering firm's standard software application for modeling the process piping described above. The unconstrained thermal expansion of 3 m of pipe over a 50°C temperature range is 1.8 mm. A schematic diagram of this movement, under the constrained conditions input into the model, is shown in Fig. 2. The force this produces in NPS 16 pipe if fixed at both ends (zero movement allowed), as assumed in the stress analysis program, is approximately 1800 kN using standard calculations (Beedle and Tall, 1960).
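The two figures quoted above follow from textbook relations: the free expansion δ = αLΔT and the fully restrained axial force F = EAαΔT. The sketch below reproduces them using typical carbon-steel properties and an assumed NPS 16 wall thickness (the article does not state the pipe schedule, so the computed force is an approximation of the quoted 1800 kN):

```python
import math

ALPHA = 12e-6  # /degC, thermal expansion coefficient of carbon steel (typical value)
E = 200e9      # Pa, Young's modulus of steel (typical value)


def free_expansion_mm(length_m: float, delta_t: float) -> float:
    """Unconstrained thermal growth: delta = alpha * L * dT."""
    return ALPHA * length_m * delta_t * 1000


def restrained_force_kn(od_mm: float, wall_mm: float, delta_t: float) -> float:
    """Axial force if both ends are rigidly anchored: F = E * A * alpha * dT."""
    area_m2 = math.pi * (od_mm - wall_mm) * wall_mm * 1e-6  # thin-wall approximation
    return E * area_m2 * ALPHA * delta_t / 1e3


print(free_expansion_mm(3.0, 50))            # ~1.8 mm, matching the quoted value
print(restrained_force_kn(406.4, 12.7, 50))  # ~1900 kN for an assumed 12.7 mm wall
```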
To compensate for the load produced by this thermal expansion, an alternate piping arrangement was proposed, which is shown in Fig. 3. Such expansion loops are common in long runs of process piping with high temperature changes (>200°C) (Pollono and Mello, 1979). Due to the extra weight of the added pipe, an additional support was designed and is represented at position C in Fig. 3.
A rough cost estimate for the materials and installation to accommodate the proposed piping with the expansion loops was provided as USD $15,000 per connection. Since there were approximately 100 such connections, this proposed change came with an estimated extra cost of USD $1.5 million for the entire facility. The change adds approximately 1000 m of pipe to the process flow, with the associated operating pressure loss plus additional maintenance. As expected, the customer questioned the validity of this proposed change, given its 50-year history of operating similar facilities without experiencing any noticeable problems despite not having these expansion loops. The customer therefore requested a thorough review of the design work that had determined this change to be necessary.
Testing the human versus machine results
In response, the engineering design firm re-entered the relevant parameters into their model and re-ran the stress analysis software. The results were consistent with their previous run, analogous to the situation with the bank worker in the aforementioned case. When the customer was still not satisfied, the stress analysts referred their work to the national home office to have their conclusions verified. As with the MI-aided medical diagnoses reported above, the re-entering of the same data by other engineers in the home office produced the same results.
Having demonstrated and verified their original recommendations, the engineering design firm pushed to have the proposed changes approved. Since the design and construction drawings were to be stamped by the engineering firm, their ISO 9001 certified processes did not allow for any deviation from the standard procedures, which required using the stress analysis software to validate all proposed designs. The engineering firm was hired as an expert and there is a reputational impact if work is rejected by a client, thereby questioning the firm's professional competencies and credibility. Unlike MI, humans may let initial conclusions bias the way new information is integrated into their original understanding, commonly termed 'confirmation bias' (Irani et al., 2015).
The customer organization had highly experienced people on staff who began their own investigation into the perceived problem. Their initial suspicion of the proposal was based on the absence of any problems with the original piping arrangement, which had remained unchanged throughout the company's history of several decades. Reviewing the stress analysis, the customer's engineers noted the model was very conservative. For example, the selected temperature range used was 50°C. The engineering firm's personnel stated that, since it could not be known with certainty what the ambient temperature would be when the piping was installed, the safest scenario was to allow for the full range of possible expansion.
Prior work suggests clients may find decisions made by outsourced design firms are much more conservative than the choices the client would make internally (Kennedy and Whittaker, 2002). The client engineers in this case recognized that the restriction on the loads at the nozzle indicated at location A in Fig. 1 was not based on a concern for any safety risk. The load limit was imposed because stresses downstream of that point could impact the life of certain sensitive components in the system. In addition, the normal operating temperature range of the piping was 4 to 14°C. Only in rare circumstances would the piping see the extreme design temperatures of -20 to 30°C. The experienced operators found that vibrations in the operating equipment tended to relieve initial stresses by slight shifting at bolted connections or supports. The client engineers concluded that a more representative temperature change for calculating expansion would be ±5°C using 9°C as the initial state; this assessment was based on their mental model of the engineering system derived through years of experience.
The client engineers questioned the stress analysts who performed the data entry and asked how the engineering model treated the neoprene pad at location B in Fig. 1. This pad had been added at some point in the company's past and was intended to compensate for thermal expansion. The pad had been sized so that its compression stiffness approximated the estimated flexibility of the "rigid" equipment downstream of location A. This stiffness is 180 kN/mm, which sounds very inflexible at first, but given the small expected expansion of 1.8 mm for a 50°C temperature change, the neoprene pad was determined to be sufficient to absorb the thermal stress. The stress analysts replied that the particular program they used only accommodated one of three choices: rigid anchor, sliding support, or no support. It was an inherent limitation of the program that the neoprene-lined support and the small flexibility of the equipment could only be treated as a fully rigid anchor as the best approximation. Using the 5°C temperature change and allowing for a small amount of compressibility of the support and equipment, as represented in Fig. 4, the client engineers then hand calculated the load. For the original design this is approximately 80 kN due to thermal expansion, or only about 5% of that predicted by the engineering firm using the MI-driven model.
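The client engineers' hand check amounts to replacing the rigid anchor with a spring and using a realistic temperature swing. A minimal version of that calculation is sketched below; the pipe stiffness and other inputs are our own assumptions (the article reports only the 180 kN/mm support stiffness, the roughly 5°C swing, and the ~80 kN result), so the output should be read as an order-of-magnitude check rather than a reproduction of their numbers:

```python
ALPHA = 12e-6   # /degC, carbon steel (typical value)
E = 200e9       # Pa, Young's modulus of steel (typical value)
AREA = 0.0157   # m^2, assumed NPS 16 steel cross-section
LENGTH = 3.0    # m, pipe run shown in Fig. 2


def anchor_load_kn(delta_t: float, support_stiffness_kn_mm: float) -> float:
    """Load at the flange when the pipe pushes against a flexible support.

    The pipe (axial spring EA/L) and the neoprene-lined support act as
    springs in series, sharing the free thermal growth between them."""
    growth_mm = ALPHA * LENGTH * delta_t * 1000
    pipe_stiffness = E * AREA / LENGTH / 1e6  # kN/mm
    k_series = 1 / (1 / pipe_stiffness + 1 / support_stiffness_kn_mm)
    return k_series * growth_mm


print(anchor_load_kn(50, 1e9))  # effectively rigid anchor: ~1900 kN (the model's view)
print(anchor_load_kn(10, 180))  # ~10 degC swing with the 180 kN/mm pad: tens of kN
```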
The client organization deemed these loads acceptable and the proposed expansion loops were not required. Despite this reasoning, the stress analysts stuck to their conviction that the results provided by the engineering model should be used and resisted accepting the original design; they essentially resisted inputs derived from mental models based on tacit knowledge and experience (see Reber, 1989 for background material on the concept of tacit knowledge).
One client engineer had worked for the client company for 20 years. At the time he started, an engineer with 30 years' experience passed on to him that, in order to assure low stress at the flange in question, no support should be placed near location A. The MI-driven plan did not follow this guidance and called for a support at location C, as shown in Fig. 3. The client engineers assessed the proposed support and determined that it would act as a fulcrum when the piping contracted at lower temperatures, as shown in Fig. 5. Problems likely occurred in the company's past, which led to the tacit knowledge held by the engineer who worked there decades ago. The client engineers also determined that when temperatures rose and the piping expanded, the piping would rise off the support, as shown in Fig. 6. Neither of these scenarios was evident in the MI-driven model. The client engineers concluded that the expense and effort to construct a support at location C would either aggravate the problem it was intended to solve or else do nothing. Since the client could not conceptualize any real benefits from the proposed USD $1.5 million additional expenditure, and suspected it might be a worse design than the original, they did not approve the change. The client commented that the better solution was not to spend resources altering the design to satisfy the engineering model but rather to alter the model to better reflect the physical reality.
Resolution of the HI-MI conflict
The engineering case described above highlights the lack of guidance for decisions involving situations where HI and MI reach different conclusions. It also illustrates how rigidly sticking to an engineering model, whether deterministic or stochastic, can result in negative consequences when the underlying assumptions of the model are not valid and do not consider tacit knowledge. It further suggests that when HI and MI are integrated, the HI may be adjusted to match the conclusions reached by the MI, as was observed in the introductory examples.
The engineering firm was contracted to design the facility in accordance with their ISO 9001 validated processes and the drawings were to be stamped by the firm’s responsible professional engineer. The client concluded that the MI driven design did not meet the HI determined requirements. The engineering firm maintained their design was the proper choice and did not agree with their client’s assessment. The engineering firm produced a full set of stamped construction drawings incorporating the proposed expansion loops within the completed design. To avoid any contractual disputes, the client paid for the services in full, including the resources required to prepare, verify, and debate the stress analysis exercise. The cost for this last portion including producing the drawings for the expansion loops was approximately USD $200,000.
The client organization produced their own set of alternate drawings for the piping in question using their original design details. The pertinent stamped drawings were removed and the new drawings substituted. The facilities were constructed in accordance with the details of the hybrid drawings. Questions regarding the overall responsibility for the design from a professional engineering perspective were left open; the company deemed the risk of future consequences from this ambiguity acceptable to assume without resolution at this juncture. The engineering firm maintained their conviction that the MI-driven design was appropriate for this application. The engineers working for the client kept their view that the HI-driven design was best. No definitive resolution was reached on the differences, the final responsibility for the design, or even the need to revise the capabilities of the stress analysis model. The stress analysis model therefore continued to be based on a flawed set of underlying assumptions and did not properly integrate experience-driven inputs from the engineering operators.
Kennedy and Whittaker (2002) determined that external consulting firms typically place a high priority on being conservative in design and are less concerned about cost performance. Owner engineers may be more willing to risk failure from a design change in order to gain the associated organizational learning.
Implications for the engineering manager
It is useful to examine the factors that potentially contribute to HI-MI decision-making and analysis, as in the case of multi-criteria decision-making (MCDM) (Triantaphyllou, 2013; Kahraman, 2008). The need for improved decision-making tools was highlighted in the literature review and the case study investigation, which would have benefited from such tools being available to the engineering team. Such a decision-making framework is contingent on multiple selection criteria being available that allow an informed decision to be made on the best action alongside other competing actions. An MCDM approach allows HI-MI decisions to be taken that are related to both human and machine based factors (i.e. the criteria), recognizing that such factors are not necessarily mutually exclusive, although in many cases it will be possible to associate them primarily with one or the other. HI-MI situations represent complex systems and we propose a supporting framework be developed based on multiple factors. Adopting a process of inductive reasoning (Klauer and Phye, 2008) allows specific instances to be converted into generalized conclusions. Therefore, the authors considered the literature related to HI-MI decisions as well as the insights from the engineering case study, and through inductive reasoning derived four main groups of factors (or decision criteria) that have the potential to significantly impact HI-MI decisions. These are human factors, machine factors, knowledge factors, and process factors. This group of factors represents a holistic view of the decision criteria for HI-MI systems, including both technical and social aspects, which is needed for interconnected systems where humans are interacting at multiple levels within the system (Vespignani, 2009).
Human factors include the features and mechanisms through which humans interact with other elements within a given system (see Stanton et al., 2013). Conversely, machine factors relate to the design, manufacture and operation of the system, such as an electromechanical actuator driven by an electric motor used in a manufacturing process (see Kalpakjian and Schmid, 2014). Human operators and machines are clearly dependent on the availability of knowledge (in the form of engineering data and generated information) and hence it is appropriate to consider knowledge-based factors (see Studer et al., 1998). Finally, the process adopted for knowledge to be utilized by either the human operator or the machine is important and hence it is appropriate to consider process factors (see Becker et al., 2013).
The definitions for these groups of factors (or criteria), in regard to establishing an MCDM approach for HI-MI decisions, are as follows (an illustrative scoring sketch is given after the list):
Human factors: understanding how humans interact with systems, including physical and non-physical aspects (especially relating to social and interpersonal relations). These social-based factors include areas such as trust, communication, openness and level of reciprocity in relations. It is pivotal to capture the interaction of the human element with the system, but also the interaction between the human actors in the system, i.e. peer-to-peer interaction.
Machine factors: understanding how the engineering system is designed and operates from a technical perspective, including the operating specification and performance levels. The machine factors involve the programming of the machine intelligence, including control algorithms and approaches such as instance and rule-based learning. The level of autonomy of the system can also be considered, i.e. autonomous or semi-autonomous.
Knowledge factors: understanding how data and information are created, communicated, utilized and stored in accordance with the engineering requirements for the system. The knowledge factors involve the data and information processed by the system, including both codified and tacit knowledge. Knowledge availability as well as knowledge integrity need to be considered, along with, in the case of model-based systems, an understanding of the deterministic or stochastic elements.
Process factors: understanding how engineering activities are undertaken for the system requirements to be delivered. The process factors involve both local procedures (i.e. standard operating procedures) as well as any national or international standards, such as ISO 9001. Appropriate project management and risk management factors can also be considered.
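As an illustration of how the four groups of factors could be operationalized, the following sketch implements a simple weighted-sum multi-criteria score, one of the most basic MCDM techniques (Triantaphyllou, 2013). The criteria, weights, and scores are invented for illustration only and are not part of the framework in Figure 7:

```python
from dataclasses import dataclass


@dataclass
class Option:
    name: str
    scores: dict  # criterion -> score on a 1 (poor) to 5 (good) scale


# Illustrative criteria drawn from the four factor areas, with assumed weights
WEIGHTS = {
    "human: operator trust in the output": 0.15,
    "human: availability of experienced reviewers": 0.10,
    "machine: validity of model assumptions": 0.25,
    "knowledge: use of tacit/operational knowledge": 0.25,
    "process: compliance with design standards": 0.15,
    "process: cost and schedule impact": 0.10,
}


def weighted_score(option: Option) -> float:
    """Simple weighted-sum MCDM score for one option."""
    return sum(WEIGHTS[c] * option.scores[c] for c in WEIGHTS)


options = [
    Option("Accept MI-driven expansion loops",
           dict(zip(WEIGHTS, (2, 3, 2, 1, 5, 1)))),
    Option("Retain original HI-driven design",
           dict(zip(WEIGHTS, (4, 4, 4, 5, 3, 5)))),
]

for opt in sorted(options, key=weighted_score, reverse=True):
    print(f"{opt.name}: {weighted_score(opt):.2f}")
```

In practice the weights themselves would be negotiated between the HI and MI stakeholders; making them explicit is part of the value of such an exercise.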
Figure 7 provides a conceptual framework to support multi-criteria HI-MI decision-making based on these four main groups of factors. The need for a HI-MI decision-making model is an emerging requirement that will benefit from an organizing framework in order to advance the research agenda and inform future studies, as well as being a tool to support practitioners engaged in HI-MI decisions. See Philbin and Kennedy (2014) for an example of a management framework developed to act as a diagnostic and health check tool for engineering and technology projects.
The framework includes proposed supporting criteria for each of the four areas that need to be considered in HI-MI decision-making applications. This approach of assembling multiple criteria is recognized for developing decision-making tools, for example in the case of multi-criteria decision-making for sustainable energy planning (Pohekar and Ramachandran, 2004). The criteria included in the HI-MI decision framework have been developed by drawing on the insights generated in the case study in conjunction with the other cases from the literature. Although this is a single case, since HI-MI decision-making is an emerging and important engineering management area, it is necessary to propose a framework that builds on the engineering case described in order to advance the understanding of this area and act as the basis for a future body of knowledge on HI-MI systems. It is recognized that the framework is an initial attempt to move forward the state of the art for the emerging subject of HI-MI decision-making. Consequently, further development of the framework will be required in due course, e.g. understanding the relative importance of the four main areas of factors and how they vary according to the circumstances of the engineering system (i.e. related to contingency theory). This contingency includes a temporal perspective of the HI-MI system, with the relative contributions of the four areas likely to change over time.
The decision-making framework can, for illustrative purposes, be applied to the case study investigation. This illustrates how the tool could have been used to support HI-MI decision-making and represents an initial validation of the utility of the conceptual framework. Considering the findings from the case study, it is possible to derive the main areas that should have been addressed by the engineering company, which are summarized in Table 1. This validation is achieved through qualitatively describing the factors from the case study according to the four main areas of the decision-making tool (i.e. human, machine, knowledge, and process-related factors). As can be seen, there are a number of factors that should have been considered. If this form of analysis had been available to the engineering team and there had been a suitable opportunity to present and analyze these findings, the authors propose that a different outcome may have been possible. If this model had been adopted, the client company could have saved USD $200,000 and ensured a more robust and reliable engineering solution was adopted.
In regard to the implications for engineering managers and HI-MI decisions, there needs to be a greater awareness of the supporting factors to be considered for HI-MI systems. Increasingly, we are moving from systems based on either purely HI operations or purely MI operations to mixed-mode systems. In these mixed-mode systems, effective collaboration and cooperation between the HI and MI sub-systems is needed. Adopting a systems viewpoint is encouraged so that a holistic perspective can be taken when undertaking the potentially complex decision-making associated with HI-MI engineering applications. Engineering managers need to gain access to a new set of tools and techniques to help them navigate the HI-MI world.
Proposed research agenda on HI-MI engineering systems
We propose the following research agenda to inform the development of a new body of knowledge on HI-MI engineering systems. This research agenda builds on the aforementioned HI-MI literature and case study investigation as well as the conceptual framework that has been developed. The research agenda is provided to help guide other researchers interested in investigating HI-MI systems and the issues that need to be addressed. To allow prioritization, we have synthesized the key research areas to be developed over short, medium and long timeframes.
Short-timeframe research areas (0-2 years)
• Development of HI-MI decision support tools, including refinement of the proposed multi-criteria decision-making (MCDM) approach.
• Numerical models that simulate the performance of HI-MI engineering systems.
• Control and monitoring algorithms supporting adaptive applications of HI-MI engineering systems.
Medium-timeframe research areas (2-5 years)
• Planning tools to support HI-MI system integration in regard to design of organizational structures and processes.
• Comparative studies on HI-MI engineering systems from different industrial sectors, e.g. manufacturing, transport, telecommunications, etc.
• Understanding the role of tacit knowledge in HI-MI systems and methods to support codification of such knowledge.
Long-timeframe research areas (5-10 years)
• Learning algorithms that allow HI-MI engineering systems to self-improve across the system’s operational parameters.
• Understanding the societal implications of HI-MI system adoption and the impact on specialized expert job roles.
• Development of a new international standard to support the design and operation of HI-MI engineering systems.
Conclusions and future work
This article explores what began as anecdotal observations suggesting that MI is advancing faster than HI is able to adapt in order to adequately assure that desired outcomes will result. In the case study, the humans continually interacting with MI resisted challenges to the decisions made by the MI. The humans less familiar with the MI processes believed they could see flaws in the MI decisions and that they could explain why the errors were produced. The outcome of the case demonstrated that there were insufficient guidelines to resolve the differences between the results produced by the MI and the HI experts. The two groups went their separate ways, each believing they were more correct than the other. Many questions remained unanswered, such as who is ultimately responsible at a professional level if future problems occur with the final design. Although the client agreed to pay for the services they did not ultimately use, future similar incidents could result in contractual disputes.
The subject of HI-MI decision-making is an emerging area and consequently we have proposed an innovative conceptual framework to support HI-MI decisions based on a multi-criteria decision-making approach. This framework has been synthesized by considering the extensive literature sources reviewed as part of this work along with the case study. To illustrate the areas to be considered when applying such a framework, the HI-MI decision-making tool was applied to the findings from the case study. In a rapidly changing work environment with HI-MI integration increasing in frequency and complexity, the future engineering manager will need tools that provide clear guidance to resolve conflicting results between the two systems. To be seen to add value to organizations, engineering management must quickly develop a body of knowledge to provide this guidance on HI-MI decision-making. The conceptual framework and research agenda proposed here provide an initial attempt to inform this body of knowledge and consequently will be useful for both researchers and practitioners concerned with understanding the emerging HI-MI system paradigm and the associated issues to be addressed.
Future work is suggested in a number of areas. It is proposed that empirical research be carried out in different industrial applications of HI-MI systems, for example in construction, manufacturing, transport, telecommunications and financial services. Such sectors are seeing a rise in HI-MI systems and there is an emerging need to develop new quantitative models for engineering managers to support HI-MI decisions. Adopting systems approaches is also proposed as a way to model HI-MI decisions through developing cause and effect models.
© The Author(s) 2018. Published by Higher Education Press. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0)