Convergence to real-time decision making

James M. TIEN

Front. Eng ›› 2020, Vol. 7 ›› Issue (2): 204–222. DOI: 10.1007/s42524-019-0040-5

RESEARCH ARTICLE

Abstract

Real-time decision making reflects the convergence of several digital technologies, including those concerned with the promulgation of artificial intelligence and other advanced technologies that underpin real-time actions. More specifically, real-time decision making can be depicted in terms of three converging dimensions: Internet of Things, decision making, and real-time. The Internet of Things includes tangible goods, intangible services, ServGoods, and connected ServGoods. Decision making includes model-based analytics (since before 1990), information-based Big Data (since 1990), and training-based artificial intelligence (since 2000), and it is bolstered by the evolving real-time technologies of sensing (i.e., capturing streaming data), processing (i.e., applying real-time analytics), reacting (i.e., making decisions in real-time), and learning (i.e., employing deep neural networks). Real-time includes mobile networks, autonomous vehicles, and artificial general intelligence. Central to decision making, especially real-time decision making, is the ServGood concept, which the author introduced in an earlier paper (2012). A ServGood is a physical product or good encased by a services layer that renders the good smarter and more adaptable for a specific purpose or use. The addition of another layer of communication sensors can further enhance its smartness and adaptability. Such connected ServGoods constitute a solid foundation for the advanced products of tomorrow, which can further display their growing intelligence through real-time decisions.

Keywords

real-time decision making / services / goods / ServGoods / Big Data / Internet of Things / artificial intelligence / wireless communications


Internet of Things

The paper details the technologies that are converging to make real-time decision making (DM) a greater reality. More specifically and as indicated in Fig. 1, real-time DM can be depicted in terms of three converging dimensions. The first dimension is the Internet of Things (IoT), which includes tangible goods, intangible services, ServGoods, which Tien (2012; 2015) defines as services-encased goods, and connected ServGoods. The second dimension is DM, which includes model-based analytics (since before 1990), information-based Big Data (since 1990), and training-based artificial intelligence (AI) (since 2000), and additionally embraces the evolving technologies of sensing (i.e., capturing streaming data), processing (i.e., applying real-time analytics), reacting (i.e., making real-time decisions), and learning (i.e., employing deep neural networks). The third dimension is real-time, which includes mobile networks, autonomous vehicles (AVs), and artificial general intelligence (AGI).

The remainder of Section 1 identifies the physical devices and service processes that endeavor to obtain the necessary data, appropriately process those data, and then make informed decisions; Section 2 details the DM process; Section 3 concerns real-time issues; and Section 4 closes with several related observations. As alluded to earlier, the IoT are actually connected ServGoods. Thus, IoT are ServGood systems or devices, each with an Internet Protocol (IP) address to communicate within LANs (Local Area Networks), WANs (Wide Area Networks), or the FAANG (Facebook, Apple, Amazon, Netflix and Alphabet’s Google) and BAT (Baidu, Alibaba, Tencent) clouds. To begin, what is the definition of a ServGood or, more appropriately, a connected ServGood? In Fig. 2, a ServGood is depicted as a physical or tangible good encased or enveloped within a non-physical or intangible services layer that makes the good smarter or renders it more customizable (i.e., adaptable) for a specific purpose or use. Another layer of communication sensors would allow for interactions between and among the Internet-connected network of ServGoods. Actually, sensors can themselves be regarded as ServGoods or devices that can detect events or changes and respond, if necessary, through appropriate optical or electrical output signals. Sensors can likewise enhance the five traditional senses of sight, hearing, taste, smell, and touch. Thus, while Fig. 2 illustrates a static or stationary three-dimensional (3D) internet of ServGoods, Fig. 3 illustrates a temporally dynamic internet of ServGoods, within which the ServGoods are able to move along the fourth (time) dimension.

Table 1 compares tangible goods with intangible services; it further distinguishes between ServGoods and connected ServGoods. Indeed, ServGoods are becoming the preeminent innovations of the 21st Century. In fact, Table 2 reports that five of the top ten companies on the MIT Technology Review’s list of the 50 “smartest” companies (MIT, 2017), namely, Nvidia, Amazon, Alphabet/Google, iFlytek, and Tencent, are known for connected ServGood products, all with an AI focus. Interestingly and for the first time, two of the top ten are Chinese companies: iFlytek is dedicated to the development of human speech technologies, and Tencent is a holding company for a diverse set of businesses, including e-commerce, video gaming, real estate, virtual reality, robotics, ride-sharing, and banking. Table 3 identifies the nature of the ServGood connection or interaction: human-to-human (e.g., through cell phone); machine-to-machine (e.g., through IoT); machine-to-human (e.g., through video conferencing); and human-to-machine (e.g., through e-commerce).

Before addressing the DM process in the next section, it is insightful to consider DM from a 3-tier perspective, as illustrated in Fig. 4. In the first, “data” tier, raw data are processed into information, which can then, if appropriate, be processed into knowledge and, finally, if further appropriate, into wisdom. In the second, “DM” tier, data can support operational DM, information can support tactical DM, knowledge can support strategic DM, and wisdom can support systemic DM. Finally, in the third, “time” tier, the influence of time is highlighted: real-time corresponds to events that can occur in intervals of time which approach zero, while steady-state corresponds to events that occur as time approaches infinity and the corresponding event values exist and are finite.

Finally, it should be noted that the IoT is itself in a constant state of flux or reconfiguration. For example, efficient microservices can be combined with edge computing, whereby compute power can be accessed as needed. Moreover, different microservices could share a common hardware device (e.g., a camera).

Decision making

Too often decision makers have to act with incomplete data or under uncertainty. In either case, as indicated in Section 1 and depicted in Fig. 5(a), real-time DM is framed by four steps: sensing, processing, reacting, and learning. As detailed in Tien (2003), the feedback loops in Fig. 5 serve to refine and update or inform the four steps; Ross (2015), for example, defines a “robo-pancreas” ServGood as one that can simultaneously sense, process, react and learn about diabetic recognition and mitigation.

•Sensing. Every human can be considered to be a giant multi-sensor, accumulating a vast volume of data over time and space, ranging from metadata to raw data, processed data, information, knowledge and/or wisdom. Data are being obtained by both qualitative (i.e., mostly human) and quantitative (i.e., mostly electronic) sensors, ranging from conducted surveys to digital recordings. Clearly, all of these data can accumulate very quickly in both time and space; as indicated in Fig. 6, the digital data generated and replicated every year have exploded from 1 zettabyte in 2010 to an estimated 35 zettabytes in 2020. Note that a byte is a basic digital unit containing 8 binary bits, which can represent 2^8 = 256 values (i.e., 0 to 255); furthermore, a kilobyte is 10^3 bytes, a megabyte is 10^6 bytes, a gigabyte is 10^9 bytes, a terabyte is 10^12 bytes, a petabyte is 10^15 bytes, an exabyte is 10^18 bytes, a zettabyte is 10^21 bytes, and a yottabyte is 10^24 bytes; a short sketch of this arithmetic follows below. In 2019, the expected data volume is about 25 zettabytes.
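To make the above arithmetic concrete, the following minimal Python sketch reproduces the unit ladder and the compound annual growth rate implied by the reported 2010–2020 data explosion; the figures are those cited above, not new estimates.

    # Decimal (SI) byte units: each prefix is a factor of 1000 = 10**3.
    prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
    for i, p in enumerate(prefixes, start=1):
        print(f"1 {p}byte = 10**{3 * i} bytes")

    # Growth from 1 zettabyte (2010) to an estimated 35 zettabytes (2020)
    # implies a compound annual growth rate of about 43%.
    cagr = (35 / 1) ** (1 / 10) - 1
    print(f"Implied annual growth, 2010-2020: {cagr:.1%}")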

•Processing. Now, near the end of the second decade of the 21st Century, the explosion in visual, audio, and other forms of digital data is posing a potentially overwhelming problem. Fortunately, continued electronic innovations have kept pace with Moore’s law, resulting in storage and CPU (central processing unit) capacities becoming cheaper, larger, faster, and smarter. At present, Big Data and its more recent cousin, AI, are actually being employed as competitive advantages, whereby the accumulated data are judiciously processed to better manage supply chains, to yield new insights about customers, and to identify new areas of business opportunity. In sum, Big Data and AI have been able to improve services, products and processes by enabling more real-time, informed decisions (Tien, 2014; 2017). Moreover, the real-time mining of streaming data has contributed to both enhanced data accuracy and timely DM.

•Reacting. When considering and managing connected ServGoods, it is critical to act and react in real-time; this is especially necessary at the operational level, where humans can at best react in seconds while software-driven actuators can react in nanoseconds. Nevertheless, both humans and software programs must still respond within the decision informatics paradigm depicted in Fig. 5(a). It should be noted that the paradigm mirrors the evidence- or risk-based protocols that reflect the needs in, say, healthcare, transportation and finance. The human-information interface is another critical element of decision informatics; it is, moreover, a vital aspect of systems engineering.

•Learning. As indicated in Fig. 5(a), the decision informatics paradigm subsumes the critical feedback or learning step, akin to critically updating a ServGood’s knowledge base in preparation for future actions or reactions. The learning step is typically an adapt-as-you-go process. For example, while a robot could, at the beginning, simply obey Asimov’s initial Three Laws of Robotics (Asimov, 1950), it must learn additional principles so that it can satisfactorily resolve, as an example, the well-known “trolley” dilemma (i.e., a trolley that is hurtling towards a group of bystanders could be diverted by an attentive engineer onto a different path where only one person is standing; the question is whether it is appropriate for the engineer to play God and decide who lives and who dies). As further discussed in Section 3.2, AVs will, of course, constantly face such trolley dilemmas. As another example, England’s House of Lords Select Committee on Artificial Intelligence (2018) recently identified an AI bill of principles or, in essence, limitations, which are listed below.

1) AI should be developed for the common good and benefit of humanity;

2) AI should operate on principles of intelligibility and fairness;

3) AI should not be used to diminish the data rights or privacy of individuals, families or communities;

4) All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI;

5) The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.

Clearly, the five stated principles serve, to some extent, to treat AI entities, together with their ethical demeanor, as legal personas subject to the law.

Model-based analytics

Over time, the critical DM method of model-based analytics has also made room for information-based Big Data and, more recently, training-based AI. While Big Data and AI are further discussed in Sections 2.2 and 2.3, respectively, it is helpful to first understand model-based analytics. Model-based analytics, both quantitative and qualitative, have been prevalent since the dawn of mathematical reasoning. Quantitative research is primarily focused on numerical and repeatable data; it avoids subjectivity, which is within the realm of qualitative research. In practice, both model-based approaches are typically employed in any in-depth analysis. In the natural and social sciences, quantitative research focuses on observable phenomena via statistical, mathematical or computational reasoning; the quantitative objective is to develop and validate mathematical models, theories and hypotheses. The process of measurement is, of course, central to modeling since it provides the essential relationships between the various variables.

Analytics are detailed in a range of courses, including design of experiments, hypothesis testing, time value of money, statistics, probability, sampling, regression, queuing, mathematical programming, dynamic programming, and time series. Obviously, this very brief section on model-based analytics cannot do justice to such a substantial body of knowledge. Instead, model-based analytics are further compared to Big Data in Section 2.2, and, in turn, AI is compared to Big Data in Section 2.3.

Information-based Big Data

Information-based Big Data is depicted in Fig. 5(b); it includes the sensing step, as well as the processing step (i.e., the step that processes data into information). Big Data, which includes model-based analytics, refers to a data set whose volume is beyond the ability of available analytics to process within a manageable time frame. Indeed, Big Data is not a fixed or immutable term; it is highly dependent on the capacity and ability of the software tools. At present, “big” ranges from a few terabytes to several petabytes, and it may well reach the exabyte range in the near future. Also, Big Data can be described by several attributes: originally, there were three (i.e., volume, variety, and velocity); later, two more attributes (veracity and value) were included.

Tien (2014) has identified several key differences between the Big Data approach and its model-based counterpart; these are identified in Table 4 in terms of four components: data acquisition, data access, data analytics, and data applications. More importantly, Big Data’s somewhat convenient, but not necessarily valid, approach can result in a corresponding set of potential concerns, as listed in Table 5. Other concerns include amassing data with minimal privacy considerations and processing data in an increasingly unfocused and unproductive manner.

Training-based artificial intelligence

Training-based AI is depicted in Fig. 5(c); it includes the processing, reacting and learning steps. Moreover, AI includes both information-based Big Data and model-based analytics. Thus, AI is a ServGood or service-enveloped machine that attempts to simulate or mimic the cognitive capabilities of the human mind, sometimes regarded as human intelligence. At the core lies an integrated system of digital devices that can perform vital DM functions; it is an AI-based, real-time approach to DM.

AI, a research endeavor championed by the Defense Advanced Research Projects Agency (DARPA), is based on an evolving and growing set of real-time and service-oriented technologies that allow computers to approximate elements of human thought, including observing, understanding, learning, analyzing, optimizing, predicting, customizing, and, in general, making decisions (Castro and New, 2016). As an example, Google’s search routine, trained by neural networks and machine learning, has become much more powerful in regard to aggregating data, identifying patterns, and making appropriate decisions. Indeed, machine learning has also made considerable improvements in the recognition of voice patterns and the processing of graphic images, as well as in vehicle-assist functions. Deep learning is a critical component of machine learning, modeled on the way layers of neurons and synapses in the brain adapt as they are exposed to changing input.

A range of AI-based ServGoods is listed in Table 6, including web bots that can execute pre-programmed scripts; electromechanical robots that can autonomously perform a complex sequence of actions; electronic assistants that can understand and respond in natural language; medical devices that can monitor, diagnose and decide; and platforms that can employ machine learning and DM tools. Large, established companies like Nvidia, Intel, and Baidu have developed their own proprietary AI platforms, based on advanced GPUs (graphics processing units); initially, the GPU was designed to create and manage images for games and streaming media players. Smaller enterprises employ readily available AI platforms, including a predictive analytics service (Azure Machine Learning), an enterprise-level smart system (Ayasdi), and an image-recognition and matching system (Pinterest). Not surprisingly, some AI systems have outperformed their human counterparts, especially in regard to repetitive tasks.

The balance of Section 2.3 addresses two critical issues in greater depth: the AI time line and the machine learning approach. In regard to the AI time line and as summarized in Table 7, Tien (2017) has appropriately segmented AI’s limited history into three 25-year epochs: the 1950–1975 defining period, the 1975–2000 winter period, and the 2000–today renaissance period. AI, as a concept, was first conceived by the famous English mathematician, logician, and code-breaker, Turing (1950); it was followed up by a summer workshop at Dartmouth College where McCarthy et al. (1955) coined the “AI” term, and by a philosophical paper published by Minsky (1961). However, the defining period came to an abrupt end when Dreyfus (1972) penned “What Computers Can’t Do”, which thoughtfully argued that human intelligence is based primarily on unconscious predispositions instead of the conscious mathematical formulations initially envisioned by the early AI pioneers. Except for a brute force backgammon win in 1979 (using a DARPA-funded computer program) and a similar chess win in 1997 (using IBM’s Deep Blue supercomputer), a long winter of minimal progress ensued over the next twenty-five years, until a statistical neural network approach to machine learning made its debut in the 21st century, after almost half a century of predictions about its imminent arrival.

Clearly, the neural network approach is more consistent with the way a human brain mobilizes unconscious thoughts to recognize patterns, to identify inconsistencies, and to arrive at conclusions (Mlodinow, 2012). Such a non-algorithmic approach was the impetus for DARPA’s 2004 sponsorship of a driverless car contest across the Mojave Desert; for IBM Watson’s 2011 defeat of two Jeopardy! champions; and for Google AlphaGo’s 2016 victory over Korea’s Go champion. It should be recognized that although both Go (devised in China in 500 BC and played on a 19-by-19 board with black or white stones assigned to each player) and chess (devised in India in 455 AD and played on an 8-by-8 board with 16 pieces per player) are board games, Go, while seemingly simpler than chess, is actually more complex. Nevertheless, computers have been able to beat humans at checkers, chess, Othello, and now, with AI’s assistance, Go. In time, sophisticated, but not brute force, AI could evolve; such AI-complete, AI-hard or AGI systems would control the driverless cars of tomorrow, which, according to Tien (2016), should arrive by about 2030.

Although artificial neural networks (ANNs) were the initial basis for AI, the AI term has evolved to also encompass machine learning, machine intelligence, deep learning, visual processing, natural language processing, ranking, predicting, Big Data, real-time analytics, and, indeed, all aspects of real-time DM. More importantly, it is to be noted that most AI applications require some degree of human assistance or collaboration, as when AlphaGo technicians employed Monte Carlo simulation to assess policy and value networks in order to reach a 99.8% win rate over other Go programs. Consequently, the European champion Fan Hui was beaten by AlphaGo in October 2015, and then, in March 2016, AlphaGo bested the Korean champion Lee Sedol by 4 games to 1. Interestingly, AlphaGo’s remarkable achievement was an unforeseen AI breakthrough.

In regard to machine learning, Table 8 identifies four alternative approaches: in a neural network, neurons (akin to the nerve cells of a biological brain) are interconnected by synapses (McCulloch and Pitts, 1943); in an ANN, neurons learn to carry out a task (e.g., recognizing a unique face) by considering numerous examples that are employed to adjust the weighted layers as learning proceeds (Kleene, 1956); in early machine learning, statistical techniques are used to automate analytical model building (Samuel, 1959); and in deep learning, data representations are acquired through supervised, semi-supervised or unsupervised means (Schmidhuber, 2015). Based on pattern recognition and computational modeling theory, machine learning is employed when it is not feasible to develop explicit algorithms; example applications include filtering spam, detecting network intruders, and identifying malicious data manipulators. In an unsupervised mode, machine learning closely resembles data mining; it can yield complex, correlative models and resolution procedures that can reliably uncover hidden insights and produce repeatable decisions.

In general, AI developers are either theoreticians who focus on mathematical formulations or practitioners who focus on abstractions of high-level data with multiple processing layers. An AI network with multiple layers employs a signal path that traverses from front to back. Backpropagation then takes the error between the result of this forward stimulation and a known (i.e., training) result and propagates it backwards to reset the weights of the “front” neural units, as sketched below. While neural networks can be trained to yield appropriate or “good enough” answers (much like the Big Data approach), the resultant black box responses are not necessarily explainable. Indeed, the resultant steps and findings are not explicitly understood by even the human designer; they are simply the outcomes of a trained neural network. For this reason, the non-transparent results can be a stumbling block for the widespread promulgation and acceptance of AI; that is, if the AI outcome is inexplicable, then, for example, how can a judge or jury pass judgment on the appropriateness of an AI outcome? More research is required to better understand AI outcomes; they must become more transparent. Such a transparent AI (TAI) must be forthcoming if AI is to become a more viable and credible technology. Additionally, it is critical for AI training to be unbiased in regard to gender, sex, race, politics, etc.
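As a concrete illustration of the forward and backward passes just described, consider the following minimal Python/NumPy sketch of a two-layer network trained by backpropagation on the classic XOR task; the architecture, learning rate, and iteration count are illustrative assumptions, not prescriptions from the paper.

    import numpy as np

    rng = np.random.default_rng(1)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)    # hidden layer
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)    # output layer
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(10000):
        h = sig(X @ W1 + b1)       # forward stimulation, front to back
        out = sig(h @ W2 + b2)
        # Backward pass: the output error is propagated back to reset
        # the weights of the "front" neural units.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))   # should approach [[0], [1], [1], [0]]

Note that even in this toy network the trained weights are not directly interpretable, which is precisely the transparency concern raised above.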

Obviously, AI, especially TAI or, as further discussed in Section 3.3, AGI, can assist in almost all areas of human endeavor. As listed in Table 9, the global professional services company Accenture (2017) projects that the top 10 economic sectors that could benefit from AI by 2035 include, in order of impact, manufacturing, professional services, wholesale and retail, information and communication, financial services, construction, healthcare, lodging and food services, utilities, and education. It should also be noted that there are tradeoffs to be made between difficulty and value in the development of AI applications. Figure 7 suggests that data processing, tax preparation and genomic testing are low in both difficulty and value; customer targeting, real-time translation and machine learning are low in value but high in difficulty; credit scoring, natural language processing and leadership skills are low in difficulty but high in value; and natural language generation, social skills development and autonomous systems are high in both difficulty and value.

As in the comparison of Big Data to model-based analytics, Table 10 seeks to compare the AI and Big Data approaches, in terms of data acquisition, data access, data analytics, and data application. Although the two approaches are similar in their reliance, respectively, on large data sets and equally large sets of training examples, the table has identified a number of distinctions between the two approaches (Tien, 2017). Like Big Data, AI’s somewhat convenient and not necessarily rational approach can yield a corresponding potential set of concerns, as shown in Table 11. Of course, as in the case of Big Data, AI concerns can be overcome, or at least mitigated, with effective and thoughtful approaches and practices. Nevertheless, as indicated earlier, AI’s hidden layers do require additional research, validation and transparency.

Finally, it should be noted that AI is still evolving and converging with Big Data as it becomes a more powerful and adaptive real-time DM tool; its evolution into a more encompassing and sophisticated AGI machine is further considered in Section 3.3.

Real-time

As of the beginning of the 21st Century, DM has already come a long way: building on earlier advances ranging from static to steady-state to dynamic to increasingly real-time, it has progressed from model-based analytics (since before 1990), to information-based Big Data (since 1990), and to training-based AI (since 2000). In fact, Nvidia recently announced an eighth-generation GPU that can simultaneously handle several real-time data streams. Thus, underscored by machine learning and large data sets, real-time DM is becoming both more viable and more powerful.

There is, of course, a range of digital technologies that are near real-time in their DM processes, including, as examples, instant translators, electronic assistants, and ServGoods that can be considered virtual, augmented, or mediated reality. A key question is: what is real-time? The third tier in Fig. 4 suggests that real-time corresponds to events that can occur in intervals of time which approach zero; alternatively, real-time must be able to productively deal with the forthcoming era of AVs (Tien, 2016). With the advent of 5G communications at 10 Gbps, actions in real-time can be readily attained; thus, when transmitting at 10 Gbps, it only takes 67 ns between each frame, and the downloading of a high-definition (HD) movie will be able to occur in 1 s, versus 10 min in today’s 4G environment. In the remainder of this section, three real-time related DM technologies are highlighted: mobile networks, AVs, and AGI.
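The download-time comparison can be checked with simple arithmetic. The sketch below assumes a 1.25-gigabyte (i.e., 10-gigabit) HD movie and an effective 4G throughput of roughly 17 Mbps; these are the values implied by the “1 s versus 10 min” comparison above, not figures taken from the paper.

    def download_seconds(size_gigabits: float, rate_gbps: float) -> float:
        # Transfer time = payload size divided by link throughput.
        return size_gigabits / rate_gbps

    movie_gbits = 10.0   # assumed ~1.25-gigabyte HD movie
    print(download_seconds(movie_gbits, 10.0))    # 5G at 10 Gbps: 1 s
    print(download_seconds(movie_gbits, 0.017))   # ~17 Mbps 4G: ~588 s, i.e., ~10 min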

Mobile networks

Mobility is the byword of the day. As people are in constant motion, they likewise require real-time communication from and to any device or ServGood. Consequently, their mobile devices must be wireless or WiFi-connected so that enhanced scale and coverage can be acquired; indeed, WiFi demand is experiencing exponential growth. (It should be noted that WiFi is a wireless local area networking technology based on the IEEE 802.11 standards; the term itself is a WiFi Alliance trademark, and the designation “WiFi certified” is restricted to products that satisfy the Alliance’s interoperability certification requirements.) As identified in Table 12, WiFi currently operates at four frequency locations: 2.4, 3.6, 5, and 60 GHz. In the future, as smart gadgets proliferate, additional WiFi spectra will be required, together with such powerful new technologies as spectrum sharing and multiplexing.

Unfortunately, existing WiFi routers are subject to several drawbacks: their antennas are unreliable; they are subject to frequency interference; and they lack the ability to diagnose their own technical difficulties. More recently, smarter wireless technologies (e.g., Netgear’s Orbi) are overcoming some of these weaknesses, including through the use of WiMAX (Worldwide Interoperability for Microwave Access), a long-wavelength system that covers many kilometers and whose spectrum extends well beyond the current WiFi range of 2.4 to 60 GHz. While WiFi and WiMAX are designed for different purposes, they are actually quite complementary (in that a WiMAX-enabled computer can spawn a local-area WiFi network for interconnecting various devices). Moreover, WiMAX is typically cheaper and faster than most cable modems.

As indicated in Table 13, since 1980, there have already been four generations of mobile wireless networks, with each generation spanning about a decade. 1G (from 1980) was a strictly analog voice service; 2G (from 1990) provided a digital voice-only service; 3G (from 2000) provided digital voice plus some multimedia and text data; 4G (from 2010) is a true broadband service, with long term evolution (LTE) and WiMAX attributes that are designed to support roaming Internet access via smartphones and other handheld devices; and 5G is scheduled for commercialization in 2020, with faster speed (i.e., 10 Gbps, versus 1 Gbps for the current 4G system) and greater reliability. In addition, 5G is expected to be underpinned by several new technologies, including millimeter waves (between 30 and 300 GHz), portable miniature base stations (i.e., small cells), massive MIMO (multiple-input, multiple-output), beamforming (i.e., reduction in MIMO signal interference), and full duplex (i.e., simultaneous transmission of incoming and outgoing signals on the same frequency). Under 5G, the service quality in terms of bandwidth, loss, reliability, latency, and jitter will all be algorithmically tuned to provide the best wireless performance. In brief, 5G will be able to support real-time systems, including drones, simultaneous video feeds, instant augmented reality (AR), virtual multiplayer games, surgery-at-a-distance, and, as stated earlier, AVs.
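As a quick check on the “millimeter wave” label, wavelength equals the speed of light divided by frequency; the band boundaries used below are those stated above.

    # Why 30-300 GHz signals are called "millimeter waves": wavelength = c / f.
    C = 3.0e8  # speed of light, m/s
    for f_ghz in (30, 300):
        wavelength_mm = C / (f_ghz * 1e9) * 1000
        print(f"{f_ghz} GHz -> wavelength of {wavelength_mm:.0f} mm")
    # prints 10 mm at 30 GHz and 1 mm at 300 GHz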

The global positioning system (GPS) constitutes another critical mobile network. Bradford Parkinson is recognized as the chief architect of the ubiquitous GPS, a 1973 effort that he initiated in his capacity as a U.S. Air Force colonel; together with his colleague, James Spilker, he published the fundamental text on GPS theory and applications (Parkinson and Spilker, 1996). More recently, Parkinson was awarded the 2018 Medal of Honor by the IEEE (Institute of Electrical and Electronics Engineers) “for fundamental contributions to and leadership in ... driving the early applications of the Global Positioning System” (Perry, 2018). GPS employs precisely timed radio signals to ascertain the time and geolocation anywhere on or near the Earth, as long as there is an unobstructed line of sight to at least four GPS satellites. Unfortunately, inasmuch as GPS signals are relatively weak, they can be easily obstructed by mountains or other obstacles like buildings. Civilian use of GPS was authorized after the Korean Airlines disaster in 1983, although the system only became fully operational, with 24 satellites, in 1993, and the civilian signal remained intentionally degraded until the year 2000. New GPS receivers using an inaugural navigation band are being released in 2018 with much higher precision, to within 30 cm or just under one foot. In order to counter the U.S. domination of the GPS landscape, several countries are establishing similar systems, including China’s BeiDou Navigation Satellite System, Russia’s GLONASS (Global Navigation Satellite System), the European Union’s Galileo positioning system, India’s NavIC (navigation constellation), and Japan’s Quasi-Zenith Satellite System.

The GPS technology relies on both time and the known positions of dedicated satellites that carry stable atomic clocks, synchronized with ground clocks and with one another. Any drift from true time is corrected daily, and the satellite locations are known with great exactitude. GPS receivers, including those in most cell phones and vehicle tracking devices (e.g., Garmin and Google Maps), have clocks as well, but they are less stable and less exact. The GPS satellites, in turn, continuously transmit data about their current time and location. Although four satellites are required for normal operation, fewer can be employed in certain circumstances; if, for example, one variable is already known (e.g., by reusing the last known altitude), a receiver can determine its position using only three satellites.
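The following is a minimal two-dimensional sketch of the underlying positioning idea: subtracting one range equation from the others yields a linear system that can be solved by least squares. A real GPS receiver works in three dimensions and also solves for its own clock bias, which is why a fourth satellite is normally required; the anchor coordinates below are purely illustrative.

    import numpy as np

    # Known 2D anchor positions (stand-ins for satellites) and measured ranges.
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
    true_pos = np.array([3.0, 4.0])
    d = np.linalg.norm(anchors - true_pos, axis=1)   # measured distances

    # Subtracting the first range equation from the others linearizes the system:
    # 2(x_i - x_1)x + 2(y_i - y_1)y = d_1^2 - d_i^2 + |p_i|^2 - |p_1|^2
    A = 2.0 * (anchors[1:] - anchors[0])
    b = d[0]**2 - d[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(anchors[0]**2)
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(est)   # recovers the true position [3. 4.]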

Obviously, in any real-time decision venue, some form of time-space device must be employed, as in the case of driverless cars. Table 14 identifies a number of such decision-oriented applications, including cellular telephony (for geotagging and locating emergency vehicles), celestial astronomy (for extrasolar planet discovery and the location of atmospheric conditions), robotic navigation (for real-time location and identification of AVs), and site-specific management (for use in modern agriculture). Additionally, with the astounding proliferation of IoT devices or ServGoods, more Internet traffic is expected; to accommodate this growth, a robust mobile network will be required that can transmit information and data with minimal interruptions or delays. Clearly, both data transmission capacity (i.e., bandwidth) and data delay (i.e., latency) are critical to IoT performance. As a consequence, the updated Internet-in-the-cloud (with 5G and an enhanced GPS) is a necessary attribute for the functioning of AVs. The more recent XY system can be considered a local GPS that enables smartphone tracking of one’s pet, luggage, car, bicycle, etc.

Autonomous vehicles

Autonomous entities were first featured in science fiction as beings that appear to be sentient and autonomous in their behavior, thus considered to be inhabitants of a perceived alternate reality. However, in the context of this paper, autonomous entities refer to ServGoods that are self-controlled and not directed by outside forces. Table 15 identifies several autonomous ServGoods, including those pertaining to space (e.g., probe that can explore the surface of a planet and collect samples), architecture (e.g., windows that can adapt to changing light, heat), agriculture (e.g., drone that can control weeds, seed clouds, plant seeds, analyze soil), infrastructure (e.g., dam that can adapt or maintain reservoir level and prevent flooding) and delivery (e.g., driverless vehicle or drone that can deliver packages to a specific address).

In regard to AVs, the NHTSA (National Highway Traffic Safety Administration, 2013) identified five possible degrees or levels of automation (i.e., no automation, function-specific automation, combined function automation, limited self-driving automation, and full self-driving automation), although a couple of additional levels are being added to restrict some of the driverless vehicles from accessing certain locations unless they are equipped with additional capabilities. On the other hand and as indicated in Table 16, the five levels could likewise be stated in terms of the driver’s level of engagement, ranging from being fully engaged (pre-2000), to mostly engaged (from 2000 to 2010), to partially engaged (from 2010 to 2020), to mostly disengaged (from 2020 to 2030), and to fully disengaged (post-2030).

In fact, Tien (2016) has identified the AV to be the Sputnik of ServGoods; it has served to galvanize the U.S. (including its government, auto industry, electronics industry, and higher education establishments) into an age of real-time automata, thus precipitating a major quality-of-life disruption, together with policy implications regarding privacy and security, liability and insurance, and regulations and standards. In sum, AVs will be capable of recognizing their environs, interacting with other ServGoods, and circumnavigating barriers, all the while entertaining their occupants. As AVs proliferate, they will be recognized as just another service and not a product or good to be leased or owned; actually, they will be considered ServGoods to be co-developed and shared. Consequently, at this point in time, ServGood companies are perhaps better positioned to develop the required services-endowed AVs.

AVs were first conceived in a General Motors video that debuted at the 1939 New York World’s Fair. Much later and as alluded to earlier, DARPA announced million-dollar AV races across the Mojave Desert in 2004 and 2005, followed by an AV Urban Challenge in 2007 that was staged in a mock city. Since the turn of the century, several auto businesses, electronic enterprises, research universities, and government agencies have, to a large extent, partnered with each other to bring to fruition the era of driverless cars. Indeed, Table 17 identifies five car-assist mechanisms that are already available in today’s marketplace: parking assist, active cruise control, blind spot assist, forward collision prevention, and lane drift warning. As summarized in Table 18, innovations in every engineering discipline, including biomedical, chemical, computer science, communications, electrical, energy, environmental, industrial, materials, and mechanical engineering, have already contributed to AV advancement. Interestingly, all the identified disciplines can be considered to be a part of the broader domain of systems, man and cybernetics (Tien, 2017).

Tesla first offered an autopilot option on its 2015 Model S vehicle, with a number of ultrasonic sensors, a front radar, a camera, and electronically synchronized brakes. However, within months of the introduction of this Levels 2 and 3 rated automation, two crashes occurred: a May 7, 2016, crash that resulted in a Tesla driver’s death (blinded by sunlight, neither the human driver nor the electronic autopilot noticed the white surface of a tractor trailer), and a July 1, 2016, guardrail collision that turned the Tesla upside down (fortunately, with no sustained injuries). More recently, on March 19, 2018, a pedestrian was fatally struck by an autonomous Uber vehicle in Arizona, and on May 10, 2018, two individuals died when their 2014 Tesla Model S struck a wall and caught fire. Perhaps no AV should be introduced on the roadways unless it has attained a Level 4 designation; integrating a human driver with a partially limited AV system is not only dangerous but could put the human at a disadvantage, as the driver may well be lulled into a state of complacency instead of remaining constantly alert while actively driving the car. In fact, unlike Tesla, several AV pioneers (e.g., Alphabet’s Google, GM) have decided not to introduce their AVs until Level 4 automation has been fully achieved.

Obviously, the introduction of Level 4 AVs will not only disrupt the automobile industry but also the broader transportation industry, as well as related industries (e.g., insurance, housing, education). Anderson et al. (2014) predict that AVs will pervade the mining and farming industries by 2020 and the taxicab and car-sharing industry by 2030. Moreover, in a world of connected AVs that can communicate directly with each other, there will be no need for traffic lights. However, there will, unfortunately, be potential breaches of privacy and security; not only are the vehicles themselves vulnerable to being hacked but so are their connected ServGoods. Indeed, the AV industry must consider cyber vulnerabilities as a critical matter. Beyond the obvious issues of privacy and security, there are other critical worries, including cultural and trust concerns, government regulations and engineering standards, and risk and liability. Of course, forthcoming new technologies could also impact the promulgation and proliferation of AVs. For example, a recent technology that may have a profound impact is LIDAR (light detection and ranging), a detection system that functions like radar but employs laser light to measure distance between targets of interest.

In addition to the technical problems that must still be overcome, there are obviously a number of issues or policies to be resolved before AVs are generally allowed on U.S. roadways, including the aforementioned trolley dilemmas and the difficulties of having AVs co-exist with non-AVs. Although it may be obvious that, in the case of, say, a brake failure, an AV should spare the many over the few, humans over animals, and the young over the old (a grossly simplified rule set is sketched below), most AV quandaries will be quite nuanced, and additional training-based data are required to appropriately program AVs in real-time.
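Purely as a hypothetical illustration, the ordering just stated can be distilled into a lexicographic priority function; the Outcome fields and the choose_path helper are invented for this sketch, and actual AV ethics would require far richer, data-driven and training-based models.

    from dataclasses import dataclass

    @dataclass
    class Outcome:
        humans: int      # humans harmed if this path is taken
        animals: int     # animals harmed if this path is taken
        mean_age: float  # mean age of the humans at risk

    def choose_path(a: Outcome, b: Outcome) -> str:
        # Lexicographic rules from the text: fewer humans harmed first,
        # then fewer animals, then prefer harming the old over the young.
        key = lambda o: (o.humans, o.animals, -o.mean_age)
        return "A" if key(a) <= key(b) else "B"

    # Example: staying the course (path A) harms three younger pedestrians;
    # swerving (path B) harms one elderly pedestrian. The rules pick path B.
    print(choose_path(Outcome(3, 0, 30.0), Outcome(1, 0, 80.0)))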

Finally, a word of caution should be offered about AI-inspired AVs. Can lawyers, policymakers and ethicists hold AVs or their operators accountable as they do now? As noted earlier, AI recommendations are sometimes not transparent or obvious; if so, how can one assign fault and damages in a legal proceeding? Perhaps AGI will in time overcome this predicament.

Artificial general intelligence

Several observations about AGI are noted in Table 19. The first observation concerns today’s narrow or weak AI systems; although they can efficiently carry out internet searches, image recognition, language translation, etc., they cannot perform these individual tasks in a parallel or integrated manner. Together with Big Data, the goal of AI is, of course, to recognize and resolve problems or issues in much the same way that the human brain does, by utilizing the combined power of several million connected neurons; obviously, current ANNs are much less complex than the human brain. Certainly, AGI seeks to span neural connections much further, link many more processing layers, and not be limited to just neighboring neurons. However, training these neural network layers will require several thousand cycles of learning, especially if AI were to address related issues like social networking (Hendler and Mulvehill, 2016).

In theory, AGI can become an existential threat to humanity. Indeed, AGI, under the control of a technologically advanced and authoritarian government, could turn an acknowledged police state into a terrifying, all-knowing one in which all residents are under constant digital surveillance and human actions are constantly monitored by an ever-present Big Brother, much like the dystopian state fictionalized in “1984” (Orwell, 1949). Western democracies are also succumbing to the use of privately obtained data for inappropriate purposes, as in the case of the recent Facebook scandal (whereby the now defunct Cambridge Analytica illegally accessed 87 million records for nefarious reasons by operatives involved in the 2016 Trump campaign). Clearly, such inappropriate accesses will invariably prompt governments to become more involved in regulating the collection of and access to privately obtained data.

Moreover, the improper accumulation and misuse of private data (including the 2017 Equifax breach, the 2011 Health Net hack, the 2007 Google Street View collections, the 2006 America Online search leak, and the 2005 Sony spyware incidents) have resulted in a growing backlash against Big Data. Thus, in order to mitigate such invasions of privacy, on May 25, 2018, the European Union’s General Data Protection Regulation extended the data privacy and security rights of all European residents by giving them the power to better control their own data. More interestingly, since the data sets underpinning, say, Big Data or AI can be regarded as valuable assets, shouldn’t they be appropriately valued and bought and sold like all other assets? Likewise, shouldn’t social media companies obtain permission from, and pay, their customers for the use of the collected data?

The overarching goal of AGI, if not AI, is, of course, to react in an integrated and optimal manner. However, even under fixed or deterministic circumstances, making an optimal decision is not obvious. Furthermore, in an uncertain, dynamic or real-time setting, such optimal DM would be even more difficult, if not impossible. Note that an AGI-based machine may only become a human substitute if the underlying actions can approximate human intelligence. Such a device could recursively improve on itself and eventually achieve superintelligence; moreover, one or more superintelligences could result in a technological singularity. However, it is doubtful that such an AGI or superintelligence, with consciousness, can become an actuality in the near future (Tegmark, 2018). Nevertheless, if narrow AI addresses a particular problem by employing different parts of the brain, then AGI can be thought of as reflecting the whole brain, able to concurrently solve various problems that were not assumed in the initial design phase. The whole brain architecture (most likely based on an integration of ANNs and deep machine learning, all within a brain’s architectural framework containing some 14 billion neurons, a temporal lobe, a visual cortex, a neocortex, etc.) would exhibit the same behavior and sense of values as a human. Clearly, as long as humans remain smarter than machines, the former should have nothing to fear from the latter.

Interestingly, even without attaining AGI status, AI has already been condemned for a number of imagined and current ruthless actions, which Atkinson (2016) discredits as myths, including the beliefs that AI can undermine most jobs, render humans stupid, invade people’s privacy, encourage abuse and bias, and eventually obliterate humanity. Another fear concerns AI’s eventual displacement of humans; for example, it is well known that AI outperforms humans in a number of visually repetitive tasks, including facial recognition and X-ray reading. On the other hand, retarding AI development could be even more costly, leading to lower growth in per-capita incomes, slower progress of critical technologies like driverless cars and healthcare imaging, and, in general, fewer advances in services, goods and ServGoods. More importantly, by partnering with AI, governments could gainfully address safety, security, and public policy (including facilitating the identification of forgeries and fake news); healthcare professionals could employ AI in diagnosing diseases and developing effective treatments; and homes could become more energy-efficient by embedding AI devices in home appliances. Indeed, AI-human partnering can occur through augmentation (e.g., by providing additional information), collaboration (e.g., by offering a second opinion), and hyperlearning (e.g., by developing a digital assistant that could suggest alternative solutions). However, as noted earlier, AI may also inadvertently introduce a bias or jump to a wrong conclusion, one that could be learning-induced; if not appropriately validated, such biases could result in serious consequences, including morbidity, mortality and bankruptcy.

Clearly, caution must be exercised and all AI-derived actions must be carefully assessed, especially if they are not obviously transparent. As indicated earlier, AI, or any electromechanical outcome, works best when trained and validated by humans. This caution is best illustrated by the actions of Stanislav Petrov, a lieutenant colonel in the Soviet Air Defense Forces. When a technical glitch occurred in 1983 that recommended a retaliatory missile strike, Petrov, after carefully assessing the tense Cold War situation, chose to countermand the attack. Following his death in 2017, a biographer credited him with saving the world from certain disaster. As it turned out, the mistake was due to a computer’s failure to distinguish between the sun’s reflections off clouds and the light signatures akin to a missile attack. Clearly, retaining a human in the critical decision loop may have saved humanity. More precisely, the ultimate aim of AI or AGI should be to create machines that can truly assist and partner with people; that is why vast data sets are required to appropriately train the machines.

In regard to minimizing unintended consequences, AI developers, realizing their technology’s awesome power, are becoming more cautious in their AI pursuits. For example, on June 7, 2018, Alphabet Inc.’s Google, in response to employee backlash against weaponizing AI, stated that it would not renew its AI work with the U.S. Department of Defense, which, if successful, could have resulted in a fully autonomous weapon system that could hunt for targets and launch missiles without human supervision. Moreover, Google subsequently established a set of principles to guide its future AI decisions. Then, on October 12, 2018, Google employed these principles to justify taking a pass on the Pentagon’s 10 billion USD cloud-computing contract, entitled JEDI (Joint Enterprise Defense Infrastructure). Obviously, as AI becomes more AGI-like, such principles must be reconsidered and upgraded.

On the other hand, AI’s beneficial outcomes, including detecting fraud, composing art, optimizing logistics, conducting research, and providing instant translations, could certainly make the world a better place. In fact, technology giants like Alphabet/Google, Amazon, Alibaba, IBM and Microsoft believe that now is the right time to talk about the nearly boundless frontier of AI; in many ways, it is as much a new frontier of ethics and risk assessment as it is an awesome emerging technology, one which may someday be concurrently controlling an individual’s vehicle, smartphone, computer, pacemaker, and stock trades. Indeed, Gholami et al. (2018) describe how a future AI or AGI could coordinate the innumerable sensors in an intensive care unit and appropriately manage any required machine-human interactions.

Finally, the increased national attention on AI is perhaps best reflected in the recent establishment of a new College of Computing at the Massachusetts Institute of Technology (MIT), funded with a 350 million USD gift that will nearly double MIT’s academic capabilities in AI through the addition of some 50 new faculty positions.

Related observations

Clearly, real-time DM constitutes a major disruptive technology that is impacting, if not transforming, a range of industries in regard to their services, goods, and ServGoods. Some twelve related observations, which subsume the contents in Sections 1 through 3, and likewise reference several earlier efforts by the author, are summarized herein:

1) IoT/ServGoods are able to obtain Big Data correlations, not causations;

2) 5G wireless will greatly enhance the development of real-time technologies, including AVs;

3) Real-time decision making (RTDM) based on decision informatics is central to ServGood actions;

4) GPUs are displacing CPUs, a bonus for AI;

5) AI is critical for the Third Industrial Revolution (TIR), which subsumes mass production within mass customization;

6) IoT, RTDM and AI constitute a disruptive innovation, collectively and individually;

7) Collectively and individually, IoT, RTDM and AI are vulnerable to being hacked;

8) GPS is very vulnerable to being hacked; out of 16 “nation critical” sectors, 14 are GPS-dependent;

9) In addition to AI, other forms of intelligence can likewise be operationalized over time;

10) AI is an ever-evolving technology, from AI to AGI;

11) In regard to monetizing assets, care must be taken to not embrace profit at the expense of privacy;

12) DM has gone from model-based analytics (before 1990), to Big Data (since 1990), to AI (since 2000).

First, given that vast amounts of data are being acquired by the IoT or ServGoods, Big Data methods remain in vogue (Tien, 2014); nonetheless, it should be recognized that Big Data methods are, at best, correlational analyses which must still be statistically explored with traditional design-of-experiments and hypothesis-testing procedures to determine underlying causal relationships, if any, as illustrated below.
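A small synthetic illustration of this caution: two unrelated quantities that both happen to trend upward over time will exhibit a strong correlation that largely vanishes once the shared trend is removed, which is precisely why controlled experiments and hypothesis tests remain necessary. The data below are fabricated for the demonstration.

    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(100, dtype=float)
    # Two causally unrelated series that both grow over time.
    a = 2.0 * t + rng.normal(0, 5, t.size)
    b = 0.5 * t + rng.normal(0, 5, t.size)

    print(np.corrcoef(a, b)[0, 1])                    # high: shared trend only
    print(np.corrcoef(np.diff(a), np.diff(b))[0, 1])  # near zero once detrended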

Second, 5G wireless is a necessary foundation for real-time technologies (Tien, 2017), including AVs and other WiFi ServGoods. Similar to the Soviet Union’s 1957 launch of Sputnik, the U.S. focus on driverless vehicles, expected in 2030, is being carried out with the full and coordinated support of the auto industry, the electronics industry, the institutions of higher education, and the Federal and State governments.

Third, real-time DM based on the decision informatics paradigm (Tien, 2003) is at the center of critical innovations in healthcare, transportation, manufacturing, navigation, etc., including solutions for resolving “trolley” dilemmas, for control of risks in unsupervised learning, and for creating machine intelligence. Indeed, real-time DM is the basis for the convergence of IoT, Big Data and AI.

Fourth, the GPU with up to 5000 cores is displacing the CPU with up to 32 cores, inasmuch as the former is better suited to high performance computing or massive parallel processing, a bonus for AI learning.

Fifth, AI is a critical component of what Tien (2012) refers to as the TIR; in particular, AI is able to a) combine goods and services into “ServGoods”, b) converge adaptive services, digital manufacturing, mass customization, Big Data analytics and other TIR technologies, c) minimize the need for outsourcing and offshoring, and d) subsume mass production under an efficient mass customization framework. Also, while AI is clearly a boon to drug discovery and driverless cars, it can also be an ethical threat to, say, gene splicing and automated misinformation.

Sixth, IoT, RTDM, and AI can be regarded, collectively and individually, as possibly disruptive innovations. Thus, as noted by Tien (2017), while the AV reflects the convergence of several critical technologies, it may well make individual car ownership an anomaly and, instead, make car sharing a new reality that will permanently change the transportation landscape.

Seventh, in an integrated IoT, RTDM and AI world, there are ample opportunities for hacking (Tien, 2017). Thus, whether intended or unintended, hacking an AV could become a springboard to embedding malicious software in all the AV-connected devices.

Eighth, the GPS is also quite vulnerable to being hacked, especially if a rogue nation wished to attack another nation. In the U.S., it should be noted that out of 16 “nation-critical” assets, 14 are GPS-dependent.

Ninth, while digital methods have helped to operationalize AI, other forms of intelligence (i.e., nature smart, musical smart, reasoning smart, life smart, collaboration smart, empathy smart, emotional smart, interpersonal smart, intrapersonal smart, body smart, spatial smart, etc.) can likewise be artificially operationalized and contribute to enhancing an AI-based DM process.

Tenth, AI is an ever-evolving technology; thus, current researchers in learning and deep neural networks are already extending AI to AGI through hyper learning (i.e., unsupervised and reinforcement learning) that does not require human assistance.

Eleventh, while there is a strong incentive to monetize assets (e.g., using Facebook’s vast database), care must be taken to not over-embrace profit at the expense of under-embracing privacy.

Twelfth, the major DM approaches of model-based analytics (since before 1990), information-based Big Data (since 1990), and training-based AI (since 2000) may well be integrated over time and eventually be subsumed by AGI with enhanced scope, scale and speed.

References

[1] Accenture (2017). Impact of Artificial Intelligence on Industry Growth by 2035. Report

[2] Anderson J M, Kalra N, Stanley K D, Sorenson P, Samaras C, Oluwatola O A (2014). Autonomous Vehicle Technology: A Guide for Policymakers. Santa Monica: The RAND Corporation

[3] Asimov I (1950). I, Robot. New York: Gnome Press

[4] Atkinson R D (2016). ‘It’s Going to Kill Us!’ and Other Myths About the Future of AI. Information Technology & Innovation Foundation

[5] Castro D, New J (2016). The Promise of Artificial Intelligence. Washington DC/Brussels: Center for Data Innovation

[6] Dreyfus H (1972). What Computers Can’t Do. New York: MIT Press

[7] Gholami B, Haddad W M, Bailey J M (2018). AI in the ICU: in the intensive care unit, artificial intelligence can keep watch. IEEE Spectrum, 55(10): 31–35

[8] Hendler J, Mulvehill A M (2016). Social Machines: The Coming Collision of Artificial Intelligence, Social Networking, and Humanity. New York: Apress

[9] House of Lords Select Committee on Artificial Intelligence (2018). Five Proposed Principles for an AI Code. House of Lords of the United Kingdom Report

[10] Kleene S C (1956). Representation of events in nerve nets and finite automata. In: Shannon C E, McCarthy J, eds. Automata Studies. Princeton: Princeton University Press, 3–41

[11] McCarthy J, Minsky M L, Rochester N, Shannon C E (1955). A proposal for the Dartmouth research project on artificial intelligence. Republished in 2006. AI Magazine, 27(4): 11–14

[12] McCulloch W S, Pitts W (1943). A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics, 5(4): 115–133

[13] Minsky M (1961). Steps toward artificial intelligence. Proceedings of the IRE, 49(1): 8–30

[14] MIT (2017). 50 smartest companies in 2017. MIT Technology Review, 120(4): 54–57

[15] Mlodinow L (2012). Subliminal: How Your Unconscious Mind Rules Your Behavior. New York: Pantheon Books

[16] National Highway Traffic Safety Administration (2013). U.S. Department of Transportation Releases Policy on Automated Vehicle Development. Washington, DC: NHTSA

[17] Orwell G (1949). 1984. London: Secker and Warburg

[18] Parkinson B W, Spilker J J (1996). Global Positioning System: Theory and Applications. Reston: American Institute of Aeronautics and Astronautics

[19] Perry T S (2018). GPS’ navigator in chief. IEEE Spectrum, 55(5): 46–51

[20] Ross P E (2015). Diabetes has a new enemy: robo-pancreas. IEEE Spectrum, 52(6): 40–44

[21] Samuel A L (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3(3): 210–229

[22] Schmidhuber J (2015). Deep learning in neural networks: an overview. Neural Networks, 61: 85–117

[23] Tegmark M (2018). Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf Doubleday Publishing Group

[24] Tien J M (2003). Toward a decision informatics paradigm: a real-time information based approach to decision making. IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, 33(1): 102–113

[25] Tien J M (2012). The next industrial revolution: integrated services and goods. Journal of Systems Science and Systems Engineering, 21(3): 257–296

[26] Tien J M (2013). Big Data: unleashing information. Journal of Systems Science and Systems Engineering, 22(2): 127–151

[27] Tien J M (2014). Overview of big data: a US perspective. Bridge, 44(4): 12–19

[28] Tien J M (2015). Internet of connected ServGoods: considerations, consequences and concerns. Journal of Systems Science and Systems Engineering, 24(2): 130–167

[29] Tien J M (2016). The sputnik of ServGoods: autonomous vehicles. Journal of Systems Engineering, 26(2): 10–38

[30] Tien J M (2017). Internet of things, real-time decision making, and artificial intelligence. Annals of Data Science, 4(2): 149–178

[31] Turing A (1950). Computing machinery and intelligence. Mind, LIX(236): 433–460
