Our next generation of industry—Industry 4.0—holds the promise of increased flexibility in manufacturing, along with mass customization, better quality, and improved productivity. It thus enables companies to cope with the challenges of producing increasingly individualized products with a short lead-time to market and higher quality. Intelligent manufacturing plays an important role in Industry 4.0. Typical resources are converted into intelligent objects so that they are able to sense, act, and behave within a smart environment. In order to fully understand intelligent manufacturing in the context of Industry 4.0, this paper provides a comprehensive review of associated topics such as intelligent manufacturing, Internet of Things (IoT)-enabled manufacturing, and cloud manufacturing. Similarities and differences in these topics are highlighted based on our analysis. We also review key technologies such as the IoT, cyber-physical systems (CPSs), cloud computing, big data analytics (BDA), and information and communications technology (ICT) that are used to enable intelligent manufacturing. Next, we describe worldwide movements in intelligent manufacturing, including governmental strategic plans from different countries and strategic plans from major international companies in the European Union, United States, Japan, and China. Finally, we present current challenges and future research directions. The concepts discussed in this paper will spark new ideas in the effort to realize the much-anticipated Fourth Industrial Revolution.
Additive manufacturing (AM) technology has been researched and developed for more than 20 years. Rather than removing materials, AM processes make three-dimensional parts directly from CAD models by adding materials layer by layer, offering the beneficial ability to build parts with geometric and material complexities that could not be produced by subtractive manufacturing processes. Through intensive research over the past two decades, significant progress has been made in the development and commercialization of new and innovative AM processes, as well as numerous practical applications in aerospace, automotive, biomedical, energy and other fields. This paper reviews the main processes, materials and applications of the current AM technology and presents future research needs for this technology.
Trillions of microbes have evolved with and continue to live on and within human beings. A variety of environmental factors can cause intestinal microbial imbalance, which is closely related to human health and disease. Here, we focus on the interactions between the human microbiota and the host in order to provide an overview of the microbial role in basic biological processes and in the development and progression of major human diseases such as infectious diseases, liver diseases, gastrointestinal cancers, metabolic diseases, respiratory diseases, mental or psychological diseases, and autoimmune diseases. We also review important advances in techniques associated with microbial research, such as DNA sequencing, metabonomics, and proteomics combined with computation-based bioinformatics. Current research on the human microbiota has become much more sophisticated and comprehensive. Therefore, we propose that research should focus on host-microbe interactions and on cause-effect mechanisms, which could pave the way to an understanding of the role of gut microbiota in health and disease, and provide new therapeutic targets and treatment approaches in clinical practice.
An outbreak of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection and the coronavirus disease 2019 (COVID-19) it causes have been reported in China since December 2019. More than 16% of patients developed acute respiratory distress syndrome, and the fatality ratio was about 1%–2%. No specific treatment has been reported. Herein, we examined the effects of favipiravir (FPV) versus lopinavir (LPV)/ritonavir (RTV) for the treatment of COVID-19. Patients with laboratory-confirmed COVID-19 who received oral FPV (Day 1: 1600 mg twice daily; Days 2–14: 600 mg twice daily) plus interferon (IFN)-α by aerosol inhalation (5 million U twice daily) were included in the FPV arm of this study, whereas patients who were treated with LPV/RTV (Days 1–14: 400 mg/100 mg twice daily) plus IFN-α by aerosol inhalation (5 million U twice daily) were included in the control arm. Changes in chest computed tomography (CT), viral clearance, and drug safety were compared between the two groups. For the 35 patients enrolled in the FPV arm and the 45 patients in the control arm, all baseline characteristics were comparable between the two arms. A shorter viral clearance time was found for the FPV arm versus the control arm (median (interquartile range, IQR), 4 (2.5–9) d versus 11 (8–13) d, P < 0.001). The FPV arm also showed significant improvement in chest imaging compared with the control arm, with an improvement rate of 91.43% versus 62.22% (P = 0.004). After adjustment for potential confounders, the FPV arm also showed a significantly higher improvement rate in chest imaging. Multivariable Cox regression showed that FPV was independently associated with faster viral clearance. In addition, fewer adverse events were found in the FPV arm than in the control arm. In this open-label before-after controlled study, FPV showed better therapeutic responses in COVID-19 in terms of disease progression and viral clearance.
These preliminary clinical results provide useful information on treatments for SARS-CoV-2 infection.
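The study's headline comparison rests on medians with interquartile ranges (IQRs) for viral clearance time. As a minimal sketch of that summary statistic, the snippet below computes a median and IQR in plain Python; the clearance-time lists are hypothetical illustrations, not the trial's raw data, and the percentile convention used is one of several in common use.

```python
from statistics import median

def median_iqr(days):
    """Median and interquartile range (25th-75th percentile) of clearance times."""
    s = sorted(days)
    n = len(s)
    def pct(p):
        # nearest-rank style percentile; one of several common conventions
        k = round(p * (n - 1))
        return s[int(k)]
    return median(s), (pct(0.25), pct(0.75))

# Hypothetical clearance times in days (NOT the study's raw data)
fpv_days = [2, 3, 4, 4, 5, 9, 10]
control_days = [8, 9, 11, 11, 12, 13, 14]

fpv_median, fpv_iqr = median_iqr(fpv_days)
control_median, control_iqr = median_iqr(control_days)
```

With these illustrative inputs the FPV arm's median is 4 d and the control arm's is 11 d, mirroring the shape of the reported result.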
State-of-the-art technologies such as the Internet of Things (IoT), cloud computing (CC), big data analytics (BDA), and artificial intelligence (AI) have greatly stimulated the development of smart manufacturing. An important prerequisite for smart manufacturing is cyber–physical integration, which is increasingly being embraced by manufacturers. As the preferred means of such integration, cyber–physical systems (CPS) and digital twins (DTs) have gained extensive attention from researchers and practitioners in industry. With feedback loops in which physical processes affect cyber parts and vice versa, CPS and DTs can endow manufacturing systems with greater efficiency, resilience, and intelligence. CPS and DTs share the same essential concepts of an intensive cyber–physical connection, real-time interaction, organization integration, and in-depth collaboration. However, CPS and DTs are not identical from many perspectives, including their origin, development, engineering practices, cyber–physical mapping, and core elements. In order to highlight the differences and correlation between them, this paper reviews and analyzes CPS and DTs from multiple perspectives.
Information and communication technology is undergoing rapid development, and many disruptive technologies, such as cloud computing, the Internet of Things, big data, and artificial intelligence, have emerged. These technologies are permeating the manufacturing industry and enable the fusion of physical and virtual worlds through cyber-physical systems (CPS), which mark the advent of the fourth stage of industrial production (i.e., Industry 4.0). The widespread application of CPS in manufacturing environments renders manufacturing systems increasingly smart. To advance research on the implementation of Industry 4.0, this study examines smart manufacturing systems for Industry 4.0. First, a conceptual framework of smart manufacturing systems for Industry 4.0 is presented. Second, demonstrative scenarios that pertain to smart design, smart machining, smart control, smart monitoring, and smart scheduling are presented. Key technologies and their possible applications to Industry 4.0 smart manufacturing systems are reviewed based on these demonstrative scenarios. Finally, challenges and future perspectives are identified and discussed.
The current irrational use of fossil fuels and the impact of greenhouse gases on the environment are driving research into renewable energy production from organic resources and waste. The global energy demand is high, and most of this energy is produced from fossil resources. Recent studies report that anaerobic digestion (AD) is an efficient alternative technology that combines biofuel production with sustainable waste management, and various technological trends exist in the biogas industry that enhance the production and quality of biogas. Further investments in AD are expected to meet with increasing success due to the low cost of available feedstocks and the wide range of uses for biogas (i.e., for heating, electricity, and fuel). Biogas production is growing in the European energy market and offers an economical alternative for bioenergy production. The objective of this work is to provide an overview of biogas production from lignocellulosic waste, thus providing information toward crucial issues in the biogas economy.
Pseudomonas aeruginosa causes severe and persistent infections in immunocompromised individuals and cystic fibrosis patients. The infection is hard to eradicate, as P. aeruginosa has developed strong resistance to most conventional antibiotics. The problem is further compounded by the ability of the pathogen to form a biofilm matrix, which provides bacterial cells with a protected environment for withstanding various stresses, including antibiotics. Quorum sensing (QS), a cell density-based intercellular communication system that plays a key role in the regulation of bacterial virulence and biofilm formation, could be a promising target for developing new strategies against P. aeruginosa infection. The QS network of P. aeruginosa is organized in a multi-layered hierarchy consisting of at least four interconnected signaling mechanisms. Evidence is accumulating that the QS regulatory network not only responds to bacterial population changes but can also react to environmental stress cues. This plasticity should be taken into consideration during the exploration and development of anti-QS therapeutics.
Advances in high-throughput sequencing (HTS) have fostered rapid developments in the field of microbiome research, and massive microbiome datasets are now being generated. However, the diversity of software tools and the complexity of analysis pipelines make it difficult to access this field. Here, we systematically summarize the advantages and limitations of microbiome methods. Then, we recommend specific pipelines for amplicon and metagenomic analyses, and describe commonly used software and databases, to help researchers select the appropriate tools. Furthermore, we introduce statistical and visualization methods suitable for microbiome analysis, including alpha- and beta-diversity, taxonomic composition, difference comparisons, correlation, networks, machine learning, evolution, source tracing, and common visualization styles, to help researchers make informed choices. Finally, a step-by-step reproducible analysis guide is introduced. We hope this review will allow researchers to carry out data analysis more effectively and to quickly select the appropriate tools in order to efficiently mine the biological significance behind the data.
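The alpha- and beta-diversity measures mentioned above are straightforward to compute once a per-sample taxon count table is available. As a minimal sketch (the OTU count vectors are hypothetical), the snippet below implements the Shannon index, a common alpha-diversity measure, and Bray-Curtis dissimilarity, a common beta-diversity measure:

```python
from math import log

def shannon_index(counts):
    """Shannon alpha diversity H' = -sum(p_i * ln p_i) over observed taxa."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * log(p) for p in ps)

def bray_curtis(a, b):
    """Bray-Curtis beta-diversity (dissimilarity) between two samples
    with abundances over the same ordered set of taxa."""
    num = sum(abs(x - y) for x, y in zip(a, b))
    den = sum(x + y for x, y in zip(a, b))
    return num / den if den else 0.0

# Hypothetical OTU count vectors for two samples over the same three taxa
sample_a = [50, 30, 20]
sample_b = [10, 10, 80]
```

For these illustrative samples the Shannon index of sample_a is about 1.03 and the Bray-Curtis dissimilarity between the two samples is 0.6; dedicated toolkits add rarefaction, phylogenetic metrics, and ordination on top of such primitives.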
This paper reviews recent studies in understanding neural-network representations and in learning neural networks with interpretable/disentangled middle-layer representations. Although deep neural networks have exhibited superior performance in various tasks, interpretability has always been the Achilles’ heel of deep neural networks. At present, deep neural networks obtain high discrimination power at the cost of a low interpretability of their black-box representations. We believe that high model interpretability may help people break several bottlenecks of deep learning, e.g., learning from a few annotations, learning via human–computer communications at the semantic level, and semantically debugging network representations. We focus on convolutional neural networks (CNNs), and revisit the visualization of CNN representations, methods of diagnosing representations of pre-trained CNNs, approaches for disentangling pre-trained CNN representations, learning of CNNs with disentangled representations, and middle-to-end learning based on model interpretability. Finally, we discuss prospective trends in explainable artificial intelligence.
Safe, efficient, and sustainable operations and control are primary objectives in industrial manufacturing processes. State-of-the-art technologies heavily rely on human intervention, thereby showing apparent limitations in practice. The burgeoning era of big data is influencing the process industries tremendously, providing unprecedented opportunities to achieve smart manufacturing. This kind of manufacturing requires machines to not only be capable of relieving humans from intensive physical work, but also be effective in taking on intellectual labor and even producing innovations on their own. To attain this goal, data analytics and machine learning are indispensable. In this paper, we review recent advances in data analytics and machine learning applied to the monitoring, control, and optimization of industrial processes, paying particular attention to the interpretability and functionality of machine learning models. By analyzing the gap between practical requirements and the current research status, promising future research directions are identified.
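As one concrete flavor of the data-driven process monitoring surveyed here, an exponentially weighted moving average (EWMA) control chart flags a slow drift in a sensor signal. The sketch below is a minimal illustration, not a method proposed by the paper; the readings, smoothing factor, and control limit are all hypothetical.

```python
def ewma_monitor(readings, lam=0.2, target=0.0, limit=1.0):
    """EWMA control chart: return the index of the first reading at which
    the smoothed statistic drifts past the control limit, else None."""
    z = target
    for i, x in enumerate(readings):
        z = lam * x + (1 - lam) * z  # exponential smoothing of the signal
        if abs(z - target) > limit:
            return i  # alarm: sustained deviation from the process target
    return None
```

Because the statistic accumulates evidence over successive samples, it catches small sustained shifts that a single-sample threshold would miss, which is the intuition behind many statistical process monitoring schemes.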
Donor shortages for organ transplantations are a major clinical challenge worldwide. Potential risks that are inevitably encountered with traditional methods include complications, secondary injuries, and limited source donors. Three-dimensional (3D) printing technology holds the potential to solve these limitations; it can be used to rapidly manufacture personalized tissue engineering scaffolds, repair tissue defects in situ with cells, and even directly print tissue and organs. Such printed implants and organs not only perfectly match the patient’s damaged tissue, but can also have engineered material microstructures and cell arrangements to promote cell growth and differentiation. Thus, such implants allow the desired tissue repair to be achieved, and could eventually solve the donor-shortage problem. This review summarizes relevant studies and recent progress on four levels, introduces different types of biomedical materials, and discusses existing problems and development issues with 3D printing that are related to materials and to the construction of extracellular matrix in vitro for medical applications.
A growing number of three-dimensional (3D) printing processes have been applied to tissue engineering. This paper presents a state-of-the-art study of 3D-printing technologies for tissue-engineering applications, with particular focus on the development of a computer-aided scaffold design system; the direct 3D printing of functionally graded scaffolds; the modeling of selective laser sintering (SLS) and fused deposition modeling (FDM) processes; the indirect additive manufacturing of scaffolds, with both micro and macro features; the development of a bioreactor; and 3D/4D bioprinting. Technological limitations will be discussed so as to highlight the possibility of future improvements for new 3D-printing methodologies for tissue engineering.
Background: Because the molecular docking technique can greatly improve efficiency and reduce research costs, it has become a key tool in computer-aided drug design for predicting binding affinity and analyzing interaction modes.
Results: This study introduces the key principles, procedures, and widely used applications of molecular docking. It also compares the commonly used docking applications and recommends the research areas for which each is suitable. Lastly, it briefly reviews the latest progress in molecular docking, such as integrated methods and deep learning.
Conclusion: Limited by incomplete molecular structures and the shortcomings of scoring functions, current docking applications are not accurate enough to predict binding affinity. However, the current molecular docking technique could be improved by integrating big biological data into scoring functions.
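Scoring functions of the kind criticized in the Conclusion typically sum simple pairwise energy terms over ligand-receptor atom pairs. The sketch below implements just one such term, a Lennard-Jones 12-6 van der Waals contribution, as a minimal illustration; the parameter values and coordinate format are hypothetical, and real docking scoring functions add electrostatic, hydrogen-bond, desolvation, and entropy terms.

```python
from math import sqrt

def lj_term(r, epsilon=1.0, sigma=3.5):
    """Lennard-Jones 12-6 energy for one atom pair at distance r."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def vdw_score(ligand_atoms, receptor_atoms):
    """Sum the van der Waals term over all ligand-receptor atom pairs,
    with atoms given as (x, y, z) coordinate tuples."""
    score = 0.0
    for lx, ly, lz in ligand_atoms:
        for rx, ry, rz in receptor_atoms:
            r = sqrt((lx - rx) ** 2 + (ly - ry) ** 2 + (lz - rz) ** 2)
            score += lj_term(r)
    return score
```

The term is zero at r = sigma and reaches its minimum of -epsilon at r = 2^(1/6) * sigma, the most favorable contact distance; docking engines minimize such sums over candidate poses.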
Obesity increases the risk for type 2 diabetes through induction of insulin resistance. Treatment of type 2 diabetes has been limited by little translational knowledge of insulin resistance, although there have been several well-documented hypotheses for insulin resistance. In those hypotheses, inflammation, mitochondrial dysfunction, hyperinsulinemia, and lipotoxicity have been the major concepts and have received a lot of attention. Oxidative stress, endoplasmic reticulum (ER) stress, genetic background, aging, fatty liver, hypoxia, and lipodystrophy are active subjects in the study of these concepts. However, none of those concepts or views has led to an effective therapy for type 2 diabetes. The reason is that there has been no consensus on a unifying mechanism of insulin resistance. In this review article, the literature is critically analyzed and reinterpreted for a new energy-based concept of insulin resistance, in which insulin resistance is a result of energy surplus in cells. The energy surplus signal is mediated by ATP and sensed by the adenosine monophosphate-activated protein kinase (AMPK) signaling pathway. Decreasing the ATP level by suppressing production or stimulating utilization is a promising approach in the treatment of insulin resistance. In support of this, many existing insulin-sensitizing medicines inhibit ATP production in mitochondria. Effective therapies such as weight loss, exercise, and caloric restriction all reduce ATP in insulin-sensitive cells. This new concept provides a unifying cellular and molecular mechanism of insulin resistance in obesity, which may apply to insulin resistance in aging and lipodystrophy.
Despite significant successes achieved in knowledge discovery, traditional machine learning methods may fail to obtain satisfactory performance when dealing with complex data, such as imbalanced, high-dimensional, or noisy data. The reason is that it is difficult for these methods to capture multiple characteristics and the underlying structure of the data. In this context, how to effectively construct an efficient knowledge discovery and mining model has become an important topic in the data mining field. Ensemble learning, a research hot spot, aims to integrate data fusion, data modeling, and data mining into a unified framework. Specifically, ensemble learning first extracts a set of features using a variety of transformations. Based on these learned features, multiple learning algorithms are utilized to produce weak predictive results. Finally, ensemble learning fuses the informative knowledge from these results to achieve knowledge discovery and better predictive performance via voting schemes in an adaptive way. In this paper, we review the research progress of the mainstream approaches of ensemble learning and classify them based on different characteristics. In addition, we present challenges and possible research directions for each mainstream approach of ensemble learning, and we also introduce combinations of ensemble learning with other machine learning hot spots, such as deep learning and reinforcement learning.
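The voting schemes mentioned above are the simplest way ensemble learning fuses weak learners' outputs. A minimal sketch in Python (the class labels and weights are hypothetical):

```python
from collections import Counter

def majority_vote(predictions):
    """Hard voting: return the class predicted by the most weak learners."""
    return Counter(predictions).most_common(1)[0][0]

def weighted_vote(predictions, weights):
    """Weighted voting: each learner's vote counts with its weight,
    e.g., a weight proportional to its validation accuracy."""
    scores = {}
    for label, w in zip(predictions, weights):
        scores[label] = scores.get(label, 0.0) + w
    return max(scores, key=scores.get)
```

With votes ["spam", "ham", "spam"], majority_vote returns "spam", while giving the middle learner a dominant weight can flip the weighted decision to "ham".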
Based on research into the applications of artificial intelligence (AI) technology in the manufacturing industry in recent years, we analyze the rapid development of core technologies in the new era of ‘Internet plus AI’, which is triggering a great change in the models, means, and ecosystems of the manufacturing industry, as well as in the development of AI. We then propose new models, means, and forms of intelligent manufacturing, an intelligent manufacturing system architecture, and an intelligent manufacturing technology system, based on the integration of AI technology with information communications, manufacturing, and related product technology. Moreover, from the perspectives of intelligent manufacturing application technology, industry, and application demonstration, the current development of intelligent manufacturing is discussed. Finally, suggestions for the application of AI in intelligent manufacturing in China are presented.
The CRISPR-Cas9 system, naturally a defense mechanism in prokaryotes, has been repurposed as an RNA-guided DNA targeting platform. It has been widely used for genome editing and transcriptome modulation, and has shown great promise in correcting mutations in human genetic diseases. Off-target effects are a critical issue for all of these applications. Here we review the current status on the target specificity of the CRISPR-Cas9 system.
The antibody-drug conjugate (ADC), a humanized or human monoclonal antibody conjugated with highly cytotoxic small molecules (payloads) through chemical linkers, is a novel therapeutic format that has great potential to make a paradigm shift in cancer chemotherapy. This new antibody-based molecular platform enables the selective delivery of a potent cytotoxic payload to target cancer cells, resulting in improved efficacy, reduced systemic toxicity, and preferable pharmacokinetics (PK)/pharmacodynamics (PD) and biodistribution compared with traditional chemotherapy. Boosted by the successes of the FDA-approved Adcetris® and Kadcyla®, this drug class has been growing rapidly, with about 60 ADCs currently in clinical trials. In this article, we briefly review the molecular aspects of each component (the antibody, payload, and linker) of ADCs, and then mainly discuss traditional and new technologies of conjugation and linker chemistries for the successful construction of clinically effective ADCs. Current efforts in conjugation and linker chemistries will provide greater insights into molecular design and strategies for clinically effective ADCs from medicinal chemistry and pharmacology standpoints. The development of site-specific conjugation methodologies for constructing homogeneous ADCs is an especially promising path to improving ADC design, which will open the way for novel cancer therapeutics.
Conversational systems have come a long way since their inception in the 1960s. After decades of research and development, we have seen progress from Eliza and Parry in the 1960s and 1970s, to task-completion systems as in the Defense Advanced Research Projects Agency (DARPA) communicator program in the 2000s, to intelligent personal assistants such as Siri in the 2010s, to today’s social chatbots like XiaoIce. Social chatbots’ appeal lies not only in their ability to respond to users’ diverse requests, but also in being able to establish an emotional connection with users. The latter is done by satisfying users’ need for communication, affection, and social belonging. To further the advancement and adoption of social chatbots, their design must focus on user engagement and take both intellectual quotient (IQ) and emotional quotient (EQ) into account. Users should want to engage with a social chatbot; as such, we define the success metric for social chatbots as conversation-turns per session (CPS). Using XiaoIce as an illustrative example, we discuss key technologies in building social chatbots, from core chat to visual awareness to skills. We also show how XiaoIce can dynamically recognize emotion and engage the user throughout long conversations with appropriate interpersonal responses. As we become the first generation of humans ever living with artificial intelligence (AI), we have a responsibility to design social chatbots to be both useful and empathetic, so they will become ubiquitous and help society as a whole.
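The CPS success metric defined above is simply the expected number of conversation-turns per session. A minimal sketch (the per-session turn counts are hypothetical logs, not XiaoIce data):

```python
def cps(turns_per_session):
    """Conversation-turns per session (CPS): the average number of
    conversation turns across a user's sessions with the chatbot."""
    if not turns_per_session:
        return 0.0
    return sum(turns_per_session) / len(turns_per_session)

# Hypothetical turn counts logged for five sessions
sessions = [12, 30, 8, 25, 15]
```

For these illustrative sessions, CPS is 18.0; a higher CPS indicates longer, more engaged conversations, which is why it serves as an engagement-oriented success metric.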
Coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2), has posed a significant threat to global health. As of March 8, 2020, it had caused a total of 80 868 confirmed cases and 3101 deaths in the Chinese mainland. This novel virus spreads mainly through respiratory droplets and close contact. As the disease progresses, a series of complications tends to develop, especially in critically ill patients. Pathological findings showed representative features of acute respiratory distress syndrome and involvement of multiple organs. Apart from supportive care, no specific treatment has been established for COVID-19. The efficacy of some promising antivirals, convalescent plasma transfusion, and tocilizumab needs to be investigated in ongoing clinical trials.
The prevalence of obesity among children and adolescents (aged 2–18 years) has increased rapidly, with more than 100 million affected in 2015. Moreover, the epidemic of obesity in this population has become an important public health problem in developed and developing countries for the following reasons. Childhood and adolescent obesity tracks into adult obesity and has been implicated in many chronic diseases, including type 2 diabetes, hypertension, and cardiovascular disease. Furthermore, childhood and adolescent obesity is linked to adult mortality and premature death. Although an imbalance between caloric intake and physical activity is a principal cause of childhood and adolescent obesity, environmental factors are especially important for the development of obesity among children and adolescents. In addition to genetic and biological factors, socioenvironmental factors, including family, school, community, and national policies, can play a crucial role. The complexity of the risk factors for developing obesity among children and adolescents makes treatment difficult for this population. Many interventional trials for childhood and adolescent obesity have proven ineffective. Therefore, early identification and prevention are the key to controlling the global epidemic of obesity. Given that the proportion of overweight children and adolescents is far greater than that of obese ones, an effective prevention strategy is to focus on overweight youth, who are at high risk of developing obesity. Multifaceted, comprehensive strategies involving behavioral, psychological, and environmental risk factors must also be developed to prevent obesity among children and adolescents.
Artificial intelligence (AI) has been developing rapidly in recent years in terms of software algorithms, hardware implementation, and applications in a vast number of areas. In this review, we summarize the latest developments of applications of AI in biomedicine, including disease diagnostics, living assistance, biomedical information processing, and biomedical research. The aim of this review is to keep track of new scientific accomplishments, to understand the availability of technologies, to appreciate the tremendous potential of AI in biomedicine, and to provide researchers in related fields with inspiration. It can be asserted that, just like AI itself, the application of AI in biomedicine is still in its early stage. New progress and breakthroughs will continue to push the frontier and widen the scope of AI application, and fast developments are envisioned in the near future. Two case studies are provided to illustrate the prediction of epileptic seizure occurrences and the filling of a dysfunctional urinary bladder.
The COVID-19 outbreak is a global crisis that has placed small and medium enterprises (SMEs) under huge pressure to survive, requiring them to respond effectively to the crisis. SMEs have adopted various digital technologies to cope with this crisis. Using a data set from a survey with 518 Chinese SMEs, the study examines the relationship between SMEs’ digitalization and their public crisis responses. The empirical results show that digitalization has enabled SMEs to respond effectively to the public crisis by making use of their dynamic capabilities. In addition, digitalization can help improve SMEs’ performance. We propose a theoretical framework of digitalization and crisis responses for SMEs and present three avenues for future research.
The real-time reverse transcription-polymerase chain reaction (RT-PCR) detection of viral RNA from sputum or nasopharyngeal swabs had a relatively low positive rate in the early stage of coronavirus disease 2019 (COVID-19). Meanwhile, the manifestations of COVID-19 as seen through computed tomography (CT) imaging show individual characteristics that differ from those of other types of viral pneumonia, such as influenza-A viral pneumonia (IAVP). This study aimed to establish an early screening model to distinguish COVID-19 pneumonia from IAVP and healthy cases through pulmonary CT images using deep learning techniques. A total of 618 CT samples were collected: 219 samples from 110 patients with COVID-19 (mean age 50 years; 63 (57.3%) male patients); 224 samples from 224 patients with IAVP (mean age 61 years; 156 (69.6%) male patients); and 175 samples from 175 healthy cases (mean age 39 years; 97 (55.4%) male patients). All CT samples were contributed by three COVID-19-designated hospitals in Zhejiang Province, China. First, the candidate infection regions were segmented out from the pulmonary CT image set using a 3D deep learning model. These separated images were then categorized into the COVID-19, IAVP, and irrelevant-to-infection (ITI) groups, together with the corresponding confidence scores, using a location-attention classification model. Finally, the infection type and overall confidence score for each CT case were calculated using the Noisy-OR Bayesian function. The experimental results on the benchmark dataset showed that the overall accuracy rate was 86.7% for all the CT cases taken together. The deep learning models established in this study were effective for the early screening of COVID-19 patients and were demonstrated to be a promising supplementary diagnostic method for frontline clinical doctors.
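The Noisy-OR combination used for the case-level score has a simple closed form: a case is flagged unless every candidate region independently fails to indicate infection, i.e., P = 1 - prod(1 - p_i). A minimal sketch (the per-region confidence scores are hypothetical):

```python
def noisy_or(region_probs):
    """Noisy-OR: probability that at least one candidate region is infected,
    given independent per-region confidence scores p_i."""
    p_no_infection = 1.0
    for p in region_probs:
        p_no_infection *= (1.0 - p)  # probability this region is NOT infected
    return 1.0 - p_no_infection

# Hypothetical per-region COVID-19 confidence scores for one CT case
scores = [0.2, 0.5, 0.1]
```

With these illustrative scores, three moderately confident regions combine to a case-level score of 0.64, higher than any single region on its own, which is how region-level evidence accumulates into a case-level decision.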
With the popularization of the Internet, permeation of sensor networks, emergence of big data, increase in size of the information community, and interlinking and fusion of data and information throughout human society, physical space, and cyberspace, the information environment related to the current development of artificial intelligence (AI) has profoundly changed. AI faces important adjustments, and scientific foundations are confronted with new breakthroughs, as AI enters a new stage: AI 2.0. This paper briefly reviews the 60-year developmental history of AI, analyzes the external environment promoting the formation of AI 2.0 along with changes in goals, and describes both the beginning of the technology and the core idea behind AI 2.0 development. Furthermore, based on combined social demands and the information environment that exists in relation to Chinese development, suggestions on the development of AI 2.0 are given.
Microbes appear in every corner of human life, and they affect every aspect of it. The human oral cavity contains a number of different habitats. The synergy and interactions of diverse oral microorganisms help the human body defend against invasion by undesirable outside stimuli. However, an imbalance of the microbial flora contributes to oral diseases and systemic diseases. Oral microbiomes play an important role in the human microbial community and in human health. The use of recently developed molecular methods has greatly expanded our knowledge of the composition and function of the oral microbiome in health and disease. Studies of oral microbiomes and of their interactions with microbiomes at other body sites and under various health conditions are critical to our understanding of the human body and to the improvement of human health.
Photosynthetic microorganisms are important bioresources for producing desirable and environmentally benign products, and photobioreactors (PBRs) play important roles in these processes. Designing PBRs for photocatalysis is still challenging at present, and most reactors are designed and scaled up using semi-empirical approaches. No appropriate types of PBRs are available for mass cultivation because of the reactors’ high capital and operating costs and short lifespan, which mainly result from a current lack of deep understanding of the coupling of light, hydrodynamics, mass transfer, and cell growth in efficient reactor design. This review provides a critical overview of the key parameters that influence the performance of PBRs, including light, mixing, mass transfer, temperature, pH, and capital and operating costs. The lifespan and the costs of cleaning and temperature control are also emphasized for commercial exploitation. Four types of PBRs (tubular, plastic bag, column airlift, and flat-panel airlift reactors) are recommended for large-scale operations. In addition, this paper elaborates on the modeling of PBRs using the tools of computational fluid dynamics for rational design. It also analyzes the difficulties in numerical simulation and presents prospects for mechanism-based models.
Most olefins (e.g., ethylene and propylene) will continue to be produced through steam cracking (SC) of hydrocarbons in the coming decade. In an uncertain commodity market, the chemical industry is investing very little in alternative technologies and feedstocks because of their current lack of economic viability, despite decreasing crude oil reserves and the recognition of global warming. In this perspective, some of the most promising alternatives are compared with the conventional SC process, and the major bottlenecks of each of the competing processes are highlighted. These technologies emerge especially from the abundance of cheap propane, ethane, and methane from shale gas and stranded gas. From an economic point of view, methane is an interesting starting material, if chemicals can be produced from it. The huge availability of crude oil and the expected substantial decline in the demand for fuels imply that the future for proven technologies such as Fischer-Tropsch synthesis (FTS) or methanol to gasoline is not bright. The abundance of cheap ethane and the large availability of crude oil, on the other hand, have caused the SC industry to shift to these two extremes, making room for the on-purpose production of light olefins, such as by the catalytic dehydrogenation of propane.
Ultrasound (US) has become one of the most commonly performed imaging modalities in clinical practice. It is a rapidly evolving technology with certain advantages and with unique challenges that include low imaging quality and high variability. From the perspective of image analysis, it is essential to develop advanced automatic US image analysis methods to assist in US diagnosis and/or to make such assessment more objective and accurate. Deep learning has recently emerged as the leading machine learning tool in various research fields, and especially in general imaging analysis and computer vision. Deep learning also shows huge potential for various automatic US image analysis tasks. This review first briefly introduces several popular deep learning architectures, and then summarizes and thoroughly discusses their applications in various specific tasks in US image analysis, such as classification, detection, and segmentation. Finally, the open challenges and potential trends of the future application of deep learning in medical US image analysis are discussed.