Challenges of human–machine collaboration in risky decision-making

Wei XIONG, Hongmiao FAN, Liang MA, Chen WANG

Front. Eng ›› 2022, Vol. 9 ›› Issue (1): 89–103. DOI: 10.1007/s42524-021-0182-0

REVIEW ARTICLE

Abstract

The purpose of this paper is to delineate the research challenges of human–machine collaboration in risky decision-making. Technological advances in machine intelligence have enabled a growing number of applications in human–machine collaborative decision-making, so it is desirable to achieve superior performance by fully leveraging human and machine capabilities. In risky decision-making, a human decision-maker is vulnerable to cognitive biases when judging the possible outcomes of a risky event, whereas a machine decision-maker cannot handle new and dynamic contexts with incomplete information well. We first summarize the features of risky decision-making and the biases to which human decision-makers are prone. We then argue for the necessity and urgency of advancing human–machine collaboration in risky decision-making. Afterward, we review the literature on human–machine collaboration in a general decision context from the perspectives of human–machine organization, relationship, and collaboration. Lastly, we identify the challenges of enhancing human–machine communication and teamwork in risky decision-making and outline future research avenues.
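To make "risky decision-making" concrete: the canonical setting is a choice among prospects with known outcomes and probabilities, where a normative decision-maker maximizes expected value but a human decision-maker systematically deviates from it. The sketch below is a minimal illustration, not taken from the paper: it contrasts the expected value of a simple gamble with its valuation under prospect-theory-style value and probability-weighting functions, using the functional forms and median parameter estimates from Tversky and Kahneman's 1992 cumulative prospect theory; the gamble itself is hypothetical.

```python
# Minimal, illustrative sketch (not from the paper): expected value vs. a
# prospect-theory valuation of a simple gamble. Functional forms and the
# median parameter estimates follow Tversky & Kahneman (1992).

def value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """S-shaped value function: concave over gains, steeper (loss-averse) over losses."""
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

def weight(p: float, gamma: float = 0.61) -> float:
    """Inverse-S probability weighting: overweights small p, underweights large p."""
    return p ** gamma / (p ** gamma + (1.0 - p) ** gamma) ** (1.0 / gamma)

# Hypothetical prospect: a 5% chance to win 100, otherwise nothing.
p, gain = 0.05, 100.0
expected_value = p * gain                 # normative benchmark: 5.0
prospect_value = weight(p) * value(gain)  # subjective valuation: about 7.6

print(f"expected value: {expected_value:.2f}")
print(f"prospect value: {prospect_value:.2f}")  # > EV: the small probability is overweighted
```

Here the overweighting of the 5% chance makes the subjective valuation exceed the expected value, the kind of systematic deviation that motivates pairing human judgment with machine support.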

Keywords

human–machine collaboration / risky decision-making / human–machine team and interaction / task allocation / human–machine relationship

Cite this article

Wei XIONG, Hongmiao FAN, Liang MA, Chen WANG. Challenges of human–machine collaboration in risky decision-making. Front. Eng, 2022, 9(1): 89–103. DOI: 10.1007/s42524-021-0182-0


RIGHTS & PERMISSIONS

The Author(s) 2022. This article is published with open access at link.springer.com and journal.hep.com.cn
