AI-based robots in industrialized building manufacturing

Mengjun WANG , Jiannan CAI , Da HU , Yuqing HU , Zhu HAN , Shuai LI

Front. Eng ›› 2025, Vol. 12 ›› Issue (1) : 59 -85.

DOI: 10.1007/s42524-025-4099-x
Construction Engineering and Intelligent Construction
REVIEW ARTICLE


Abstract

Industrialized buildings, characterized by off-site manufacturing and on-site installation, offer notable improvements in efficiency, cost-effectiveness, and material use. This transition from traditional construction methods not only accelerates building processes but also enhances working efficiency globally. Despite its widespread adoption, the performance of industrialized building manufacturing (IBM) can still be optimized, particularly in enhancing time efficiency and reducing costs. This paper explores the integration of Artificial Intelligence (AI) and robotics in IBM to improve efficiency, cost-effectiveness, and material use in off-site assembly. Through a narrative literature review, this study systematically categorizes AI-based Robots (AIRs) applications into four critical stages (Cognition, Communication, Control, and Coordination and Collaboration) and then investigates their application in the factory assembly process for industrialized buildings, which is structured into distinct stages: component preparation, sub-assembly, main assembly, finishing tasks, and quality control. Each stage, from positioning components to the integration of larger modules and subsequent quality inspection, often involves robots or human-robot collaboration to enhance precision and efficiency. By examining research from 2014 to 2024, the review highlights the significant improvements AI-based robots have introduced to the construction sector, identifies existing challenges, and outlines future research directions. This comprehensive analysis aims to establish more efficient, precise, and tailored construction processes, paving the way for advanced IBM.

Graphical abstract

Keywords

industrialized building / off-site assembly / automated factory assembly / AI applications in construction / robot manufacturing

Cite this article

Mengjun WANG, Jiannan CAI, Da HU, Yuqing HU, Zhu HAN, Shuai LI. AI-based robots in industrialized building manufacturing. Front. Eng, 2025, 12(1): 59-85 DOI:10.1007/s42524-025-4099-x


1 Introduction

Industrialized construction, also known as off-site construction or prefabrication, shifts 85%–90% of the building process from on-site to off-site, where components are mass-produced, assembled in factories, and shipped to the site for installation. This method reduces construction time, costs, and material waste compared to traditional methods (Thai et al., 2020, He et al., 2021). With increasing demand for efficiency (Ferdous et al., 2019), this technology is gaining attention worldwide, including in Hong Kong, China, Malaysia, Singapore, and the United States, where investments in industrialized construction surpassed $1 billion in 2018 (Pullen et al., 2019, Thai et al., 2020, Ekanayake et al., 2021).

However, some studies have reported that the performance of industrialized buildings in real applications could be further improved in time efficiency and cost (Ferdous et al., 2019, Karthik et al., 2020). This can be attributed to several reasons: first, the lack of unified technical guidance for industrialized construction; second, low labor productivity owing to a shortage of technical experts; and third, dependence on traditional methods and underutilization of factory space. In this context, employing AIR in off-site prefabrication can address these issues and improve efficiency while reducing errors in the controlled factory environment (Pfeiffer, 2016). AIRs combine robotic systems with AI technologies, such as computer vision, algorithm optimization, and data processing, enabling them to perform tasks with higher efficiency, precision, and adaptability. This synergy allows robots to excel in diverse applications, from welding and assembly in the automotive and electronics industries to 3D printing in manufacturing, which requires high accuracy (Bhatt et al., 2020, Zhu et al., 2020). Recognizing the potential of AIR, researchers have shown significant interest in applying these technologies within industrialized construction processes (Baduge et al., 2022, Pan et al., 2022). This includes a range of innovative practices, such as non-standard timber fabrication and assembly using robotic technologies (Willmann et al., 2016, Eversmann et al., 2017), in situ fabrication of concrete structures through robotic intervention (Hack et al., 2020), optimization of geometry partitioning and material distribution within prefabricated components (Buchanan and Gardner, 2019, Parisi et al., 2024), and computerized design and digital fabrication (Huang and Fan, 2018, He et al., 2021). These research efforts collectively aim to enhance efficiency, precision, and customization in construction, indicating a transformative shift toward industrialized building.

Given the promising future of AIR in industrialized buildings, several scholars have offered review-based outlooks on this field. For example, Wang et al. (2020b) reviewed the adoption of digital technologies for off-site modular buildings and assessed different technologies to be applied to off-site modular buildings. Tehrani et al. (2023) focused on the use of robots in off-site assembly and on-site installation, suggesting a broader scope for robotic involvement in construction. Pan et al. (2022) investigated the roles of AIR in modular construction, particularly for off-site applications, highlighting the transformative potential of integrating AI within robotic systems applied in IBM. Fu et al. (2024) delved into human-robot collaboration in the manufacturing of modular buildings, a key aspect of modern construction practices. Zhang et al. (2023a) shifted the focus to on-site construction, evaluating how human-robot cooperation can enhance efficiency. Gusmao Brissi et al. (2022) discussed how robotic systems can effectively communicate and contribute to cost-effective modular building processes. These collective insights underscore the growing importance of AIR in improving industrialized construction. However, there is no systematic literature review of AIR-assisted off-site IBM, which has great potential to be further improved to increase efficiency (Nam et al., 2020, Zhang et al., 2020). Therefore, this study aims to address this knowledge gap with the following objectives:

(1) Perform a narrative literature review (NLR) to summarize existing AIR techniques in different fields that could be potentially used in IBM.

(2) Develop a mapping system that correlates potential AIR techniques with their applications in IBM.

(3) Identify the challenges and offer a future perspective on the use of AIR in IBM.

2 Methodology

2.1 Framework

The factory assembly process for industrialized buildings can be structured into distinct stages, beginning with component preparation, which involves positioning components, identifying construction types, and determining assembly sequences and interrelationships (Viana et al., 2017). The process then advances to sub-assembly and main assembly stages. In the sub-assembly stage, smaller components are put together, often with the assistance of robots or through human-robot collaboration for tasks such as assembling windows or wall panels. The main assembly stage involves integrating these smaller components into larger modules, like entire walls or floors, typically handled by industrial robots due to the weight of these modules, with human workers providing supervision. After assembly, finishing tasks and quality control, including painting, sealing, and inspections, are conducted, often automated by robots (see Fig.1).

This paper focuses on the off-site assembly process in IBM, where construction robots—such as mobile robots for material transportation and robotic arms for assembly tasks—play a critical role. AI enhances these processes, enabling robots to perceive their environment, optimize control through advanced path and motion planning, and facilitate better coordination and collaboration. By enhancing robots' ability to autonomously navigate and interact with their surroundings, AI ensures precise and efficient operations within the factory setting. Optimizing path planning and motion control reduces errors and improves workflow in off-site assembly, while AI-driven coordination allows for synchronized and efficient task execution, which is essential in the complex environment of industrialized construction. To systematically explore the role of AIR in IBM, this review is structured into four key modules: cognition, communication, control, and coordination and collaboration. This division is rooted in the core characteristics of AI-aided robots and their relevance to the manufacturing process. As shown in Fig.2, these four modules are interrelated and together provide a comprehensive framework for understanding the integration of AIR in IBM.

The cognition module covers the cognitive techniques that empower AIR with the ability to perceive, understand, and interact with their environments. This module forms the foundation of this review as it enables intelligent robots to operate autonomously and make decisions based on sensory inputs and data analysis. The communication module is closely related to cognition, as the data exchanged is often used for cognitive processing and decision-making. This module explores core communication protocols, fieldbus and network integration systems, and interfaces that facilitate seamless information exchange in industrialized building processes. It is essential for the effective integration of various robotic systems and human-robot interactions. The control module encompasses the mechanisms by which robots execute precise movements, optimize path and motion planning, and adapt to changing environments. This module is crucial for ensuring that robots can carry out tasks with the required precision and efficiency in industrialized building settings. Control depends on both cognition (for decision-making) and communication (for receiving and sending commands). Building upon the foundation of cognition, communication, and control, which are necessary for effective teamwork and autonomous operation, the coordination and collaboration module focuses on how robots work together in a coordinated manner, optimizing resource allocation and managing conflicts, while also covering human-robot collaboration.

2.2 Method

Inspired by previous studies (Asghari et al., 2022, Fu et al., 2024), this paper employs an NLR methodology to comprehensively examine the four key domains identified in Section 2.1; Fig.3 provides an overview of the review analysis framework. The review process involves an in-depth exploration of the principles of AIR, their applications, and the specific challenges they face within IBM. By analyzing these challenges in conjunction with recent advancements in AIR, the paper provides insights into the future potential of these technologies in IBM.

The literature review follows a keyword-based search strategy, with Google Scholar serving as the primary search engine. Adapting the keyword selection method of Cai et al. (2023a), the keywords relevant to the research technologies, functionalities, and applications are outlined in Tab.1. Specifically, the search queries are combinations of terms from each list, for example, “Robots task allocation in modular building.” The search results were manually filtered by reading the titles and abstracts to ensure relevance to the study’s objectives as well as the selected scope. Additional related articles were identified by reading the main text of the filtered results. To focus on the most current developments, the review was limited to publications from 2014 to 2024. Ultimately, a total of 227 papers were selected for detailed analysis and discussion.
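The combination of keyword lists described above can be sketched as a Cartesian product. The lists below are hypothetical stand-ins for the actual columns of Tab.1, chosen only to illustrate how the query strings are generated:

```python
from itertools import product

# Hypothetical keyword lists standing in for the columns of Tab.1
technologies = ["Robots", "AI", "Computer vision"]
functionalities = ["task allocation", "path planning"]
applications = ["modular building", "off-site assembly"]

def build_queries(*lists):
    """Return every cross-list combination as a single search string."""
    return [" ".join(terms) for terms in product(*lists)]

queries = build_queries(technologies, functionalities, applications)
print(len(queries))  # 3 * 2 * 2 = 12 candidate queries
```

Each resulting string (e.g., "Robots task allocation modular building") would then be submitted to the search engine and the hits filtered by title and abstract as described.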

The structure of the paper is as follows: Section 3 delves into the four identified domains, discussing the fundamental principles of AIR and their applications in IBM. Section 4 discusses the potential for further enhancement within each domain. Section 5 concludes the paper by summarizing the key discussions and offering insights for the effective adoption of AIR in IBM.

3 Findings

3.1 Cognition

Cognition in robots is primarily concerned with the robot’s understanding of its environment, facilitated by sensors that collect environmental data for processing and interpretation. In the assembly process, AI helps identify block types and detect their positions, laying the foundation for optimizing assembly sequences and determining robot target positions (Back et al., 2020). For finishing tasks, cognition aids in identifying locations requiring painting or sealing, and in quality supervision, it detects incorrect installations or cracks, contributing to safe and accurate assembly (Brosque et al., 2020). In workflows involving human interaction, such as human-robot collaboration, detecting human positions, behaviors, and intentions, and recognizing their commands, is crucial for robots' scene understanding (Liang et al., 2021). Section 3.1.1 reviews advanced cognition techniques, including the hardware that captures sensing data and the software techniques for processing it. Section 3.1.2 then focuses on the specific applications of these technologies in modular building assembly.

3.1.1 Advanced cognitive techniques

This section details the sensory hardware that forms the foundation of robotic cognitive systems. It also details the software algorithms and how they leverage the collected sensor data to endow robots with cognitive capabilities, such as understanding complex scenes, recognizing objects, and navigating construction sites with precision. Fig.4 shows the framework of Section 3.1.1 and the interrelationship between hardware and software.

Hardware: Sensors

In cognitive systems for construction robots, an array of sensors plays a pivotal role in environment perception. LiDAR sensors are fundamental in generating real-time 3D maps, enabling robots to detect obstacles and navigate sites with remarkable precision (Muñoz-Bañón et al., 2022). Cameras, in both 2D and 3D variants, offer a rich visual understanding of the construction environment, which is essential for tasks like material identification (Zhang et al., 2018) and progress monitoring (Ahmadian Fard Fini et al., 2021, Moragane et al., 2022). Radar is useful for mobile perception, especially in adverse weather conditions (Wang et al., 2023a). Acoustic sensors or microphones capture sound patterns, enabling robots to respond to auditory signals or detect abnormalities in machines through sound analysis (Rao et al., 2022).

Besides these fundamental sensors, there are many other perception sensors useful in recognizing targets to be manipulated by robots. Robotic perception systems integrating ultrasonic sensors with flexible triboelectric sensors detect object shape and distance through reflected ultrasound, positioning robotic manipulators accurately for object grasping (Yasin et al., 2021, Shi et al., 2023b). Force and torque sensors are critical for enabling precise control during tasks such as lifting, placing, or fastening components, ensuring accuracy and safety (Li et al., 2020, Cao et al., 2021). Proximity sensors help robots detect the presence of objects or surfaces without direct contact (Koyama et al., 2018), which is essential for avoiding collisions and ensuring proper alignment of components during assembly. Infrared sensors detect heat signatures, which are useful in monitoring the curing process of materials like concrete or in detecting overheating components (Yumnam et al., 2021, Singla et al., 2023), thereby preventing potential hazards and ensuring the quality of the construction process.

For robots in other fields, tactile sensing has gained attention for its efficiency and low latency. Multimodal robot skin cells, networked to form large-area skin patches, cover robot surfaces, enabling efficient handling of vast amounts of tactile data (Zhou et al., 2024). Innovations like robotic e-skin composed of resistive tactile force and temperature sensors monitor the sliding and slipping movements of objects, enhancing soft grippers’ dexterity (Yamaguchi et al., 2019, Bao et al., 2023). The development of compliant “skins” for humanoids integrates distributed pressure sensors based on capacitive technology, allowing deployment on nonflat surfaces (Mishra et al., 2021). Novel fabric-based, flexible, and stretchable tactile sensors can seamlessly cover natural shapes, providing a more intuitive sense of touch (Pyo et al., 2021).

Beyond sensors directly attached to robots, ambient intelligence also enhances robotic perception. Multi-agent robot-sensor networks, including distributed cameras, environmental sensors (e.g., temperature, humidity, light), and motion detectors (Prati et al., 2019), are employed for surveillance and monitoring of human living and working environments. These sensors gather detailed environmental data, which helps robots understand and navigate their surroundings more effectively (Dong et al., 2019). This integration allows robots to adapt to dynamic conditions, detect changes in their environment, and respond accordingly, improving their ability to perform complex tasks in varied settings.

Software: Cognition Techniques in Robots

This subsection delves into the techniques that enable robots to understand their surroundings, focusing on environment modeling (which covers object detection and classification) and human-related cognition.

a) Environment modeling

Environmental modeling is the cornerstone of how robots understand and navigate the surrounding world, covering multiple dimensions from object recognition to two-dimensional image segmentation and three-dimensional point cloud processing. Object recognition, an essential part of environmental modeling, empowers robots to comprehend their surroundings by identifying and manipulating objects. Deep learning has significantly enhanced object detection capabilities, particularly through the application of Convolutional Neural Networks (CNNs) (Muhammad et al., 2021). This advancement is evident in algorithms like R-CNN (Zhao and Cheah, 2023), YOLO (You Only Look Once) (Li et al., 2023c), and SSD (Single Shot MultiBox Detector) (Liu et al., 2021a), which have become benchmarks in the field. Object classification is seamlessly integrated into this process, with CNNs extracting image features through layered network architectures (Ku et al., 2021). Prominent models such as VGG (Chen and Guhl, 2018), ResNet (Hou et al., 2020), and Inception (Xiao and Kang, 2021) have tackled issues like gradient vanishing and model overfitting, thus enhancing classification performance significantly.

In the two-dimensional aspect, image segmentation technology is key to environmental understanding, promoting the robot’s recognition of the environment by distinguishing objects from the background in images (Wang et al., 2022b, Bai et al., 2023). Traditional methods such as threshold segmentation (Pare et al., 2020), region growing (Yeom et al., 2017), and edge detection (Kheradmandi and Mehranfar, 2022), although simple and efficient, are limited by image quality (Wang et al., 2020d). In recent years, the rise of deep learning technology, such as Fully Convolutional Networks (FCNs) (Tulbure et al., 2022) and U-Net (Vasquez et al., 2024), has greatly improved the accuracy and efficiency of image segmentation. Especially, the introduction of attention mechanisms (Huang et al., 2022, Zhang et al., 2024) and Transformer structures (Wang et al., 2023c) has further enhanced the performance of segmentation models. Recently, image segmentation algorithms that combine large language models have shown the potential to maintain high performance with a small amount of labeled data (Wang et al., 2023b, Yun et al., 2023).
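Among the traditional methods above, threshold segmentation is the simplest: every pixel above a chosen intensity is labeled foreground. A minimal NumPy sketch on a synthetic image (not tied to any specific system in the cited works):

```python
import numpy as np

def threshold_segment(image, threshold):
    """Binary segmentation: pixels above the threshold become foreground (1)."""
    return (image > threshold).astype(np.uint8)

# Synthetic 8x8 "image": a bright 4x4 square on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 200.0
mask = threshold_segment(img, 100)
print(mask.sum())  # 16 foreground pixels
```

Its sensitivity to lighting and image quality is exactly the limitation noted above, which motivates the learned approaches such as FCNs and U-Net.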

In the three-dimensional aspect, the acquisition and processing of point cloud data are fundamental to constructing detailed three-dimensional models of the environment surrounding the robot. Technologies such as laser scanning (Hu et al., 2023), stereo vision (Yuan et al., 2022), and structured light scanning (Lam et al., 2022) provide depth information of the environment from different perspectives. The preprocessing of point cloud data, including filtering, registration, and segmentation (Wang et al., 2020c, Li et al., 2021), provides accurate data for subsequent environmental understanding. The application of deep learning in point cloud processing, such as PointNet (Kim et al., 2022) and PointNet++ (Yin et al., 2021), as well as models based on Transformers (Zhou et al., 2022), has demonstrated powerful capabilities in handling three-dimensional data, providing effective technical support for robots' autonomous navigation and task execution in complex environments (Chen et al., 2019). Furthermore, Simultaneous Localization and Mapping (SLAM) allows a robot to build a map of an unknown environment while simultaneously keeping track of its own location within that map (Kim et al., 2018).
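One common preprocessing step mentioned above is filtering, often realized as voxel-grid downsampling: all points falling into the same voxel are replaced by their centroid, reducing data volume before registration or learning. A small self-contained sketch (not a reproduction of any cited pipeline):

```python
import numpy as np

def voxel_downsample(points, voxel_size):
    """Replace all points in each voxel with their centroid."""
    buckets = {}
    for p in points:
        # Integer voxel index for each coordinate
        key = tuple(int(np.floor(c / voxel_size)) for c in p)
        buckets.setdefault(key, []).append(p)
    return np.array([np.mean(b, axis=0) for b in buckets.values()])

cloud = np.array([[0.1, 0.1, 0.1],
                  [0.2, 0.2, 0.2],   # falls in the same voxel as the point above
                  [1.1, 1.1, 1.1]])
reduced = voxel_downsample(cloud, voxel_size=0.5)
print(len(reduced))  # 2 voxels remain
```

Libraries such as Open3D provide optimized versions of this operation; the sketch only shows the underlying idea.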

b) Human-related cognition

Cognition related to humans is also an essential component of robotic cognition systems. It involves recognizing human activities, detecting human bodies, understanding intentions, and recognizing gestures and voice commands. This section will detail the human-related cognition and its applications.

First, Human Activity Recognition (HAR) is a technology that identifies human behaviors by analyzing sensor data, video, and other multimodal information (Li et al., 2023b). Deep learning models such as CNNs (Slaton et al., 2020, Sherafat et al., 2022) and Recurrent Neural Networks (RNNs) (Guo et al., 2023, Younesi Heravi et al., 2024) have been widely applied in HAR, improving the accuracy and efficiency of recognition. Graph Convolutional Networks (GCNs) are used for processing skeletal data, enhancing the accuracy of activity recognition by modeling the connections between human joints (Shi et al., 2020).
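Before sensor data reach a CNN or RNN, HAR pipelines typically segment the continuous stream into fixed-length, overlapping windows and extract per-window features. The sketch below illustrates this common preprocessing step on a toy accelerometer trace (window length, step size, and the mean/std feature set are illustrative choices, not taken from the cited studies):

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Split a 1-D sensor stream into overlapping fixed-length windows."""
    starts = range(0, len(signal) - win + 1, step)
    return np.array([signal[s:s + win] for s in starts])

def window_features(windows):
    """Per-window mean and standard deviation, a common hand-crafted HAR feature set."""
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1)])

accel = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0])  # toy accelerometer trace
w = sliding_windows(accel, win=4, step=2)
feats = window_features(w)
print(w.shape, feats.shape)  # (3, 4) (3, 2)
```

Deep models then either consume these features or learn features directly from the raw windows.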

Human body detection technology shares similarities with object detection in environmental modeling. However, human body detection faces unique challenges, such as occlusion, pose variation, and scale changes (Cheltha et al., 2024). To address these issues, specialized human body detection methods have been proposed. For example, pose estimation-based methods like OpenPose detect human poses by identifying key points of the body (Urgo et al., 2024), and part-based models like Deformable Part Models (DPMs) (Chen et al., 2020) decompose the human body into multiple parts and detect the position of each part to cope with pose variation and occlusion.

In understanding human intentions, voice recognition technology plays a key role. In addition to traditional RNNs, Transformer models like BERT and GPT have been used for voice recognition (Acheampong et al., 2021, Zheng et al., 2021), improving the understanding and accuracy of language content recognition. Another technique is end-to-end voice recognition systems like DeepSpeech (Hannun et al., 2014), which directly convert sound signals into text, reducing the dependency on traditional feature extraction steps. Similar to voice recognition, facial expression recognition can use deep learning-based methods, such as the Facial Action Coding System (FACS) combined with CNNs, to identify different facial action units and infer emotions (Deng et al., 2019). Body language analysis can utilize pose estimation technology like PoseNet to recognize human postures and infer intentions (Aonty et al., 2023). Gesture recognition technology is usually based on image recognition algorithms, such as CNNs. For example, using 3D CNNs to analyze time-series images can identify the dynamic features of gestures (Wang et al., 2023c). Eye-tracking techniques are also applied widely in tracking human gaze to infer their intention (Hasanzadeh et al., 2017, Guo et al., 2022). Another approach is based on hand keypoint detection technology like MediaPipe Hands, which recognizes specific gestures by identifying key points of the hand (Docekal et al., 2022).
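Once hand keypoints are available (e.g., from a detector such as MediaPipe Hands), gesture recognition can be as simple as geometric rules over the landmarks. The sketch below is a hypothetical rule-based classifier over two invented keypoints; real systems would use the full 21-landmark set and learned models:

```python
import math

def classify_gesture(thumb_tip, index_tip, pinch_threshold=0.05):
    """Toy rule-based classifier: a small thumb-index distance reads as 'pinch'.

    Keypoints are (x, y) pairs in normalized image coordinates, as a hand
    detector would typically return them.
    """
    d = math.dist(thumb_tip, index_tip)
    return "pinch" if d < pinch_threshold else "open"

print(classify_gesture((0.50, 0.50), (0.52, 0.51)))  # pinch
print(classify_gesture((0.30, 0.40), (0.70, 0.80)))  # open
```

Such hand-crafted rules are brittle compared to the CNN-based approaches cited above, but they illustrate how keypoint detection reduces gesture recognition to reasoning over a few coordinates.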

3.1.2 Applications in IBM

Environmental modeling

In off-site IBM, environmental modeling plays a crucial role by enabling robots to understand their surroundings in depth, including identifying workstations, storage areas, and transport routes (Bowmaster and Rankin, 2019, Yin et al., 2019, Anane et al., 2022), and thereby effectively planning their movements and operations. Additionally, environmental modeling technology allows robots to detect and avoid potential obstacles (Yang and Kang, 2021, Brosque et al., 2023), such as other robots, workers, or scattered materials, ensuring safety and smoothness on the production line. Using these models, robots can also perform precise path planning, reducing movement time and distance within the work area and thereby improving overall production efficiency.

Object recognition

Advanced deep learning models like Faster R-CNN and YOLOv3 enable robots to accurately identify and classify various building elements and materials in the factory (Riordan et al., 2019, Stjepandić et al., 2022). This capability accelerates the sorting and assembly process, enhancing the speed and accuracy of production. Furthermore, combined with pose estimation technology, robots can determine the optimal grasp points and operation positions based on the detected object’s shape and size (Qin et al., 2016, Dawod and Hanna, 2019, Li and Qiao, 2019), ensuring the correct assembly and fixation of components. Additionally, robots can use vision detection systems to monitor the assembly process in real-time, detecting defects and poor-quality products to ensure high product standards (Zheng et al., 2020, Moon et al., 2024). For quality control in modular construction, computer vision systems integrated with AI analyze images or video feeds to identify defects or deviations from expected standards (Bae and Han, 2021, Lee et al., 2022b). For example, AI-driven vision systems can inspect welds or paint quality on modular units, automatically flagging areas that do not meet the predefined standards, thereby enabling early corrective measures (Chen et al., 2021c, Golec et al., 2023, Jung et al., 2023).

Human-related cognition

Human-related cognition technologies enable robots to collaborate more effectively with factory workers and other robots (Evjemo et al., 2020). Through natural language processing, robots can understand and respond to workers' voice commands, reducing communication errors and improving work efficiency (Li et al., 2023a, Park et al., 2024). At the same time, robots can understand workers' emotions and intentions by analyzing their facial expressions and gestures, promoting smoother human-robot collaboration (de Gea Fernández et al., 2017, Wang et al., 2023c). Moreover, robots can monitor workers’ behavior and location, promptly identifying potential safety risks and taking measures to prevent accidents, enhancing safety on the production line (Gusmao Brissi et al., 2022, Hou et al., 2022).

3.2 Communication

Intelligent IBM requires effective communication between robots as well as between humans and robots. This includes sharing the motor status, joint positions, and the results of cognition, such as the detected positions and types of objects (Kalinowska et al., 2023), which lays the foundation for subsequent control and coordination. In human-robot collaboration, detected human information, such as position, behavior, intentions, and commands, is also shared with robots to ensure safe control. The results of quality inspections are communicated to both robots and human supervisors to lay the groundwork for further assembly. Sections 3.2.1 and 3.2.2 first review the technical details of inter-robot and human-robot communication protocols; Section 3.2.3 then details the applications of communication in IBM.

3.2.1 Technology in robots communication

This section focuses on the technical aspects of robotic communication, both inter-robot and between robots and network systems, including basic communication protocols, fieldbus, and network systems, which are essential for enhancing robotic operations in industrial environments.

Core Communication Protocols

Core protocols such as Data Distribution Service (DDS), Robot Operating System (ROS), and Message Queuing Telemetry Transport (MQTT) are particularly important in robotic data exchanges (Liang et al., 2022). DDS is designed for real-time, high-performance data exchange using a publish-subscribe model (Lin and Lu, 2024). It supports complex data distribution topologies and a wide range of Quality of Service (QoS) parameters, including reliability, bandwidth control, and latency guarantees. These features make DDS indispensable in synchronized settings. For example, in automotive manufacturing, where multiple robots collaborate to complete complex tasks like body assembly and welding, DDS ensures that these collaborative actions are precisely synchronized within strict time windows, significantly enhancing production efficiency and product quality (Lin and Lu, 2024). ROS utilizes a publish-subscribe system via TCP or UDP, facilitating versatile data and service exchanges among various robot nodes (He et al., 2022). This setup enhances modularity and scalability in robotic applications, such as in logistics where different robotic units collaborate for inventory management and package sorting.

Meanwhile, MQTT thrives in environments with limited bandwidth and high network latency. It operates on a simple publisher-subscriber client-server model, efficiently managing network bandwidth and reducing the size of data packets (Jiang et al., 2022), crucial for enhancing the efficiency of remote communications. It is particularly effective in remote operation contexts, like petrochemical plants (Shu et al., 2018) and offshore oil rigs (Naranjo et al., 2018), where there is often a need to remotely monitor and control robotic operations in vast and complex industrial environments. MQTT ensures that operators, even from an office setting, can receive real-time updates on the status of robots on-site or send control commands swiftly in emergency situations, effectively preventing or responding to potential safety incidents.
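A concrete feature of MQTT's publish-subscribe model is hierarchical topic filtering, where subscribers use "+" to match a single topic level and "#" to match any remaining levels. The sketch below implements that matching rule in plain Python to show how a broker decides which subscribers receive a message (the topic names are invented for illustration):

```python
def topic_matches(filter_, topic):
    """MQTT-style topic filter matching: '+' = one level, '#' = all remaining levels."""
    f, t = filter_.split("/"), topic.split("/")
    for i, part in enumerate(f):
        if part == "#":            # '#' matches the remainder of the topic
            return True
        if i >= len(t):            # filter is longer than the topic
            return False
        if part != "+" and part != t[i]:
            return False
    return len(f) == len(t)       # every level matched, same depth

print(topic_matches("factory/+/status", "factory/robot1/status"))  # True
print(topic_matches("factory/#", "factory/robot1/pose/x"))         # True
print(topic_matches("factory/+/status", "factory/robot1/pose"))    # False
```

In a factory setting, a supervisor could subscribe to "factory/#" to monitor every robot, while a cell controller subscribes only to "factory/+/status".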

DDS and MQTT are not only bridges for communication between robots but also integrate seamlessly with sensor networks, actuators, and factory management systems, enabling real-time data collection and processing and pushing forward the deep development of factory automation and intelligence. The integration of Internet of Things (IoT) and 5G technologies has expanded the use of these communication protocols, especially in smart manufacturing (Mahiri et al., 2022, Cheng et al., 2024). For instance, 5G technology ensures that robots can quickly respond to sudden commands or adjustments, optimizing production efficiency and minimizing downtime (Cheng et al., 2024). IoT technology, by connecting various sensors and actuators, not only facilitates real-time data collection and analysis on assembly lines but also enables real-time monitoring and optimization of resource use (Mustafha et al., 2020, Yuan et al., 2021). IoT technology is particularly important in installation tasks as it helps coordination systems track the status of various resources in real-time (Wu et al., 2022b), such as material usage rates, tool locations, and robot conditions.

Middleware technologies such as Apache Kafka (Lourenço et al., 2021) and RabbitMQ (Yoshino et al., 2021) bridge robots and higher-level operations systems, ensuring timely data transmission and maintaining data reliability and ordering, which is crucial for real-time control systems and automated monitoring. For example, in automated warehouse systems, Kafka can handle data streams from hundreds of sensors (Dymora et al., 2023), relaying this information in real time to path planning systems to optimize robot trajectories. Through these protocols, the manufacturing industry can achieve higher levels of digitalization and networking, optimize production processes, reduce operational costs, and simultaneously enhance the quality of products and services.

Fieldbus and network integration systems

Fieldbus systems like CAN (Controller Area Network) and EtherCAT (Ethernet for Control Automation Technology), as well as wireless communication protocols, including Wi-Fi, Bluetooth, and Mesh network technologies, are pivotal in robotic communications (Pereira et al., 2023).

The CAN bus allows multiple devices to communicate on the same network through a non-destructive arbitration method, which is ideal for automotive assembly, where it coordinates communication between control units, motors, and sensors (Zhang, 2022). EtherCAT transmits data using standard Ethernet frames without additional protocol overhead and excels in high-speed data transmission, crucial for precision tasks in automated packaging lines, where rapid and accurate data exchange between cameras, sensors, and controllers is needed for flawless operation (Wang et al., 2017b).
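The bitwise arbitration that lets CAN resolve bus contention non-destructively can be sketched in a few lines of Python. The function below simulates 11-bit identifier arbitration only; frame formats, bit stuffing, and error handling are omitted, and the identifiers are illustrative.

```python
def can_arbitrate(ids):
    """Simulate CAN's non-destructive bitwise arbitration over 11-bit IDs.

    Nodes transmit their identifier MSB-first; a dominant bit (0) overrides
    a recessive bit (1) on the wire. A node that reads dominant while sending
    recessive backs off, so the lowest ID (highest priority) wins and the
    winning frame is transmitted undisturbed.
    """
    contenders = set(ids)
    for bit in range(10, -1, -1):                       # MSB first
        bus = min((i >> bit) & 1 for i in contenders)   # wired-AND: 0 wins
        contenders = {i for i in contenders if (i >> bit) & 1 == bus}
    return contenders.pop()

# Three controllers start transmitting simultaneously; the lowest ID wins.
print(hex(can_arbitrate([0x2A4, 0x123, 0x1F0])))  # 0x123
```

This is why safety-critical messages on an assembly line are given low identifiers: they win arbitration automatically without any frame being corrupted.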

Wireless protocols like Wi-Fi, Bluetooth, and Mesh network technologies support diverse robotic applications (Aijaz, 2021, Artetxe et al., 2023). Wi-Fi is suitable for connecting dispersed robotic systems in large factory environments due to its high throughput and extensive coverage capabilities (Ma et al., 2019). Bluetooth, known for its low energy consumption, is suitable for small, mobile robots or sensor networks in tasks such as material handling or inventory tracking in a warehouse (De Beelde et al., 2021, Bencak et al., 2022). Mesh networks offer robust connectivity in automated warehouses, enabling continuous communication between robots and ensuring that tasks like sorting and transporting goods are performed without interruptions (Nurlan et al., 2022).

3.2.2 Human-robot interaction interfaces

This section explores the interfaces that facilitate user interaction with robotic systems, focusing on enhancing intuitiveness and efficiency in communication between humans and robots.

Augmented Reality (AR) and Virtual Reality (VR) technologies significantly enhance the user experience in interacting with robotic systems by providing visual enhancements and fully virtual environments (Zhang et al., 2023b). AR overlays computer-generated images or information onto live video streams, allowing operators to see an enhanced version of the real world, which is especially useful in precision-demanding assembly or maintenance tasks, providing crucial visual assistance and data guidance. For example, in complex mechanical assembly, AR can display the precise location and installation steps of each component, helping operators accurately complete tasks (Cattari et al., 2020, Wang et al., 2021b). In contrast, VR creates a fully controlled simulated environment, allowing operators to practice and refine their skills without physical risk (Osti et al., 2021). It can also allow operators to manipulate robot tasks remotely by immersing them in virtual representations of the real-world task (Geng et al., 2022). Furthermore, VR also offers the potential for real-time simulation, enabling operators to test strategies and adjustments without interrupting live robotic operations (Wang et al., 2024).

Graphical User Interfaces (GUI) and haptic feedback technologies make robot operations more intuitive. GUIs provide a user-friendly visual interface for intuitively monitoring and controlling robots (Liu et al., 2021b), essential in remote surgery systems where precision and response time are critical. Haptic feedback technologies such as vibration feedback and force-feedback gloves provide essential tactile sensations that replicate the physical interaction with objects, enhancing user control in teleoperation scenarios where direct handling of materials or tools is required (Ozioko and Dahiya, 2022, Shi and Shen, 2024).

Natural Language Processing (NLP) simplifies human-machine interactions by allowing operators to communicate with robotic systems using natural language. NLP includes voice-to-text conversion, semantic understanding, and intent recognition (Wu et al., 2022a, Shamshiri et al., 2024), enabling users without technical backgrounds to control robots through simple voice commands, significantly reducing the complexity of operations and enhancing operational efficiency. For example, voice commands can be used to direct robots during disaster recovery operations (Gentile et al., 2023), allowing for quick and hands-free manipulation of robots in critical situations.
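As a toy illustration of intent recognition, the sketch below maps transcribed voice commands to intents with hand-written regular expressions. The command vocabulary, intent names, and slot format are invented for illustration; a production system would use trained language models for voice-to-text conversion and semantic understanding rather than patterns like these.

```python
import re

# Hypothetical command vocabulary for illustration only.
INTENTS = {
    "move":  re.compile(r"\b(?:move|go|drive)\b.*\b(?:to|toward)\b\s+(?P<target>\w+)"),
    "stop":  re.compile(r"\b(?:stop|halt|freeze)\b"),
    "grasp": re.compile(r"\b(?:pick up|grab|grasp)\b\s+(?P<target>\w+)"),
}

def recognize_intent(utterance):
    """Map a transcribed voice command to an (intent, slots) pair."""
    text = utterance.lower()
    for intent, pattern in INTENTS.items():
        match = pattern.search(text)
        if match:
            return intent, match.groupdict()
    return "unknown", {}

print(recognize_intent("Robot, go to bay3"))  # ('move', {'target': 'bay3'})
print(recognize_intent("stop immediately"))   # ('stop', {})
```

Even this crude pipeline shows the essential structure: utterances are normalized, matched to an intent, and reduced to slots a robot controller can act on.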

3.2.3 Applications in IBM

Efficient communication technologies are essential in ensuring the efficiency and precision of the assembly process of modular building construction. Communication within robots, between robots, and between humans and robots all play key roles.

Robot self-communication

In modular building assembly, the precise action of each robot is critical to the success of the entire assembly process. By utilizing efficient internal communication protocols such as CAN and EtherCAT, robots can synchronize their sensor readings and control commands in real-time (Etz et al., 2018, Martinova et al., 2023), ensuring that each step is executed with precision. For instance, when a robot installs large structural elements like wall panels or roof modules, the precise control of its movements depends on real-time feedback from joint sensors and torque sensors (Gawel et al., 2019, Apolinarska et al., 2021). This precise internal communication allows the robot to automatically adjust its movements to accommodate minor deviations at the joints, ensuring structural stability and alignment.

Communication between robots

Communication between robots is also crucial, especially in scenarios requiring multiple robots to collaborate on installing large modules (Augugliaro et al., 2014, Zhu et al., 2021). By using wireless communication technologies like Wi-Fi and Bluetooth, robots can share positioning data (Zhu et al., 2021), progress updates (Jiang et al., 2022), and real-time status information (M. Tehrani et al., 2023), ensuring consistency in collaboration during the assembly process. For example, one robot might position a prefabricated wall panel at a specified location while another robot handles fastening and welding. Through wireless networking technologies, these robots can exchange position and task status information in real-time, precisely coordinating their actions to optimize the efficiency and quality of the assembly process (Zhou et al., 2021, Ginigaddara et al., 2022).

Human-robot communication

In terms of human-machine interaction, technologies like AR, VR, NLP, and haptic feedback provide operators with unprecedented control and intuitiveness (Kramberger et al., 2022, Cheng et al., 2023). AR and VR technologies allow operators to visually comprehend the operations and anticipated paths of robots during complex assembly tasks, either through virtual overlays or complete immersion (Kaiser et al., 2021, Ginigaddara et al., 2022), which is particularly useful for ensuring the precise assembly of large components. For instance, AR can enhance the planning process by allowing virtual simulations of planned paths (Wang et al., 2021a). This capability enables pre-execution validation of routes, ensuring that the paths are feasible and safe in real-world scenarios, thereby reducing operational risks and enhancing reliability. Additionally, NLP enables operators to communicate with robots through simple voice commands, allowing for more flexible and rapid adjustment of robot tasks or responses to sudden situations (Wang et al., 2022a, Wu et al., 2022a). Haptic feedback technology is especially critical in remote operations, providing operators with immediate physical feedback about the robot’s contact and operational status through vibrations or force-feedback gloves (Kaiser et al., 2021, Cheng et al., 2023), enhancing the safety and precision of operations.

3.3 Control

This section focuses on robotic control, covering low-level control such as precise control and adaptive control as well as high-level motion and path planning. These controls respond to environmental variances, human intentions, and activities, and are crucial for executing efficient, precise, and adaptable operations. Sections 3.3.1 and 3.3.2 discuss control techniques at different levels, while Section 3.3.3 details their applications in modular building assembly.

3.3.1 Low-level control

This module focuses on how robots execute precise and adaptive movements in response to internal and external factors.

Precise control

Precise control technologies are critical for enhancing the positional precision of robotic systems, particularly in low-level control concerning actuators and feedback mechanisms. These technologies are crucial in robotic systems performing fine operations such as assembly and welding.

Advanced servo control systems link the servo motors to a high-level control unit, such as a programmable logic controller (PLC) (Vogel-Heuser et al., 2017), conditioning the command signal and forwarding it to the motor. This setup enables exceptionally precise control of position, speed, and torque. Complementary to motor control, inverse kinematics (IK) (Zhou et al., 2023) is a fundamental robotic arm control technique that solves for the joint angles achieving the desired position and orientation of the robot’s end effector. It is vital for tasks where precise endpoint positioning is crucial, such as assembly operations where components must be aligned with high precision.
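For a planar two-link arm, IK has a closed-form solution. The sketch below uses illustrative link lengths and the elbow-down branch, and verifies the answer with forward kinematics; real manipulators with six or more joints generally require numerical solvers.

```python
import math

def two_link_ik(x, y, l1=0.4, l2=0.3):
    """Closed-form IK for a planar 2-link arm (elbow-down branch).

    Given a target (x, y) for the end effector and link lengths l1, l2
    (illustrative values in metres), return joint angles (theta1, theta2)
    in radians, or None if the target is out of reach.
    """
    r2 = x * x + y * y
    # Law of cosines for the elbow angle.
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        return None                  # target outside the workspace
    theta2 = math.acos(c2)
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2

def forward(theta1, theta2, l1=0.4, l2=0.3):
    """Forward kinematics, used here to verify the IK solution."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

sol = two_link_ik(0.5, 0.2)
print(forward(*sol))  # recovers approximately (0.5, 0.2)
```

Checking IK against forward kinematics, as done here, is standard practice before commanding a physical arm.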

Additionally, sliding mode control (SMC) alters its structure based on the state of the system and employs a discontinuous control action to ensure the system’s trajectory adheres to a predefined sliding surface despite system uncertainties or external disturbances (Baek and Kwon, 2020). For instance, it can control robotic arms with high accuracy in positioning despite dynamic loads or mechanical wear. Moreover, SMC is often employed in electric drives and actuators within robotic systems to ensure precise motion control.
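A minimal simulation conveys the idea: the control law below drives a double-integrator plant onto the sliding surface despite an unmodeled disturbance. The plant, gains, and disturbance are illustrative, and the hard sign function shown here would be smoothed (e.g., with a boundary layer) on real hardware to limit chattering.

```python
import math

def smc_track(x_ref=1.0, lam=2.0, k=5.0, dt=0.001, steps=5000):
    """Drive a double integrator to x_ref with sliding mode control.

    Sliding surface: s = e_dot + lam * e; control: u = -k * sign(s).
    The discontinuous law keeps the state on the surface despite a bounded
    unmodeled disturbance.
    """
    x, v = 0.0, 0.0
    for i in range(steps):
        e, e_dot = x - x_ref, v
        s = e_dot + lam * e
        u = -k * (1.0 if s > 0 else -1.0 if s < 0 else 0.0)
        disturbance = 0.5 * math.sin(0.01 * i)   # unmodeled load
        v += (u + disturbance) * dt              # double-integrator plant
        x += v * dt
    return x

print(round(smc_track(), 2))  # settles near the 1.0 reference
```

Once on the surface, the error obeys e_dot = -lam * e regardless of the disturbance, which is the robustness property that makes SMC attractive under dynamic loads and mechanical wear.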

Feedback-based control systems utilize representative algorithms such as Proportional-Integral-Derivative (PID) control, feedforward control, and state feedback control (Shi et al., 2023a). The PID controller is a control loop feedback mechanism widely used in industrial control systems (Bazhanov et al., 2016). It continuously calculates an error value as the difference between a desired setpoint and a measured process variable, then applies a correction based on proportional, integral, and derivative terms. PID is widely employed in robots to control motor speed and position precisely, such as maintaining a robotic arm’s joint at a set angle. Feedforward control uses a predetermined model of the system to predict outputs based on command inputs (Li et al., 2022), allowing the system to anticipate the need for control action rather than merely reacting to sensory feedback. It is commonly used in dynamic environments, such as adjusting the torque in robotic joints to handle varying loads. Furthermore, state feedback control incorporates the full state of the system into the controller to make decisions (Panda et al., 2017, Li et al., 2022), typically feeding all available state measurements back into the system for more accurate control. It has been utilized in precision systems such as satellite attitude control, where complete knowledge of the state is necessary for effective control.
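The PID law is compact enough to state directly. The sketch below holds a joint angle on a deliberately simplistic first-order plant; the gains and plant model are illustrative, not tuned for real hardware.

```python
class PID:
    """Textbook PID controller (illustrative gains, not tuned for hardware)."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = None

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Hold a joint at 90 degrees on a simplistic first-order plant.
pid = PID(kp=4.0, ki=1.0, kd=0.2, setpoint=90.0)
angle, dt = 0.0, 0.01
for _ in range(2000):
    torque = pid.update(angle, dt)
    angle += torque * dt   # toy plant: angle rate proportional to torque
print(round(angle, 1))  # converges to the 90-degree setpoint
```

The proportional term dominates the transient, the integral term removes steady-state offset, and the derivative term damps overshoot; tuning these three gains is the whole art of PID in practice.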

Adaptive control

Adaptive control is vital in environments where obstacles or conditions are unpredictable. This approach takes advantage of environmental perception, such as advanced sensor technologies and real-time data processing, allowing robots to accurately perceive and adapt to their surroundings.

The rapidly-exploring random tree (RRT) algorithm not only supports planning in static environments but also excels in dynamic and unpredictable ones by growing a search tree that randomly expands across the space, efficiently finding pathways in high-dimensional spaces (Mohammed et al., 2021, Qi et al., 2021). In an unstructured environment, RRT might be used to plan the path of a robotic arm picking up objects at random positions, ensuring the path is feasible without colliding with unforeseen obstacles.

Besides this, many other algorithms excel at adaptive robot control in dynamic environments, such as potential fields (Vlantis et al., 2023), dynamic windows (Vlantis et al., 2023), and sequential quadratic programming (SQP) (Li et al., 2016). Potential fields create a virtual 'field' around obstacles in which the force magnitude increases as the robot gets closer to the obstacle; the robot navigates by moving in the direction of the resultant force vector. This is useful for mobile robots in environments like shopping centers, where people and obstacles are continuously moving. The dynamic window approach is a real-time collision avoidance method for mobile robots that can also help robots navigate busy city streets. SQP is an iterative method for nonlinear optimization, useful in dynamic environments where the robot must adjust its path in response to changing conditions, such as in automated warehouses.
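The resultant-force computation behind potential fields can be sketched as follows. The gains, influence radius, and scene geometry are illustrative, and real implementations must additionally handle local minima, which this sketch does not.

```python
import math

def potential_step(robot, goal, obstacles, k_att=1.0, k_rep=0.5, influence=2.0):
    """Resultant force on the robot: goal attraction plus obstacle repulsion.

    Repulsion grows sharply as the robot nears an obstacle and vanishes
    outside the influence radius; gains are illustrative.
    """
    fx = k_att * (goal[0] - robot[0])
    fy = k_att * (goal[1] - robot[1])
    for ox, oy in obstacles:
        dx, dy = robot[0] - ox, robot[1] - oy
        d = math.hypot(dx, dy)
        if 0 < d < influence:
            mag = k_rep * (1.0 / d - 1.0 / influence) / (d * d)
            fx += mag * dx / d
            fy += mag * dy / d
    return fx, fy

# Step the robot along the resultant force, skirting a single obstacle.
pos, goal, obstacles = (0.0, 0.0), (5.0, 0.0), [(2.5, 0.4)]
for _ in range(600):
    fx, fy = potential_step(pos, goal, obstacles)
    norm = math.hypot(fx, fy) or 1.0
    pos = (pos[0] + 0.02 * fx / norm, pos[1] + 0.02 * fy / norm)
dist_to_goal = math.hypot(pos[0] - goal[0], pos[1] - goal[1])
print(round(dist_to_goal, 2))  # small residual distance to the goal
```

Because only local distances are needed, each step is cheap to compute, which is why the method suits crowded, continuously changing spaces.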

With the advancement of machine learning, genetic algorithms can be used in exploration robots that must continually adapt to new terrains and obstacles, such as planetary rovers (Hao et al., 2021). Model predictive control (MPC) uses a model of the robot’s dynamics to predict and optimize future movements over a moving time horizon (Zeng et al., 2024b), updating the control inputs at each timestep. This technology is particularly effective in applications where the robotic arm needs to react to dynamic objects or where precise timing is crucial, such as pick-and-place tasks involving moving conveyor belts (Alberto et al., 2023). It can also be used in autonomous driving to adaptively plan movements in response to rapidly changing traffic conditions (Lin et al., 2019). Recurrent Neural Networks (RNNs) and Hidden Markov Models (HMMs) are used to model time series data (Rahman et al., 2021), which helps in predicting future states of the environment based on historical patterns; they can be applied in interaction-based settings where the robot needs to predict human actions. Dynamic decision-making in adaptive control utilizes statistical models and machine learning algorithms, such as Decision Trees (Dönmez and Kocamaz, 2020) and Conditional Random Fields (CRFs) (Zhou et al., 2019), allowing robots to respond effectively to explicit commands or detected obstacles. CRFs are particularly useful in predicting sequential data, helping to map observed environmental variables accurately to appropriate robot responses.
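The receding-horizon idea behind MPC can be shown with a deliberately tiny example: a velocity-controlled platform whose control sequence is found by brute-force enumeration, standing in for the numerical optimizer a real MPC formulation would use. The dynamics, horizon, and cost weights are illustrative.

```python
from itertools import product

def mpc_step(x, target, horizon=5, dt=0.1, controls=(-1.0, 0.0, 1.0)):
    """One receding-horizon step for a velocity-controlled platform.

    Enumerate every control sequence over the horizon, score the predicted
    positions against the target, and return only the FIRST control of the
    best sequence; the rest of the plan is discarded and re-optimized later.
    """
    best_cost, best_u0 = float("inf"), 0.0
    for seq in product(controls, repeat=horizon):
        px, cost = x, 0.0
        for u in seq:
            px += u * dt                                  # predicted model
            cost += (px - target) ** 2 + 0.001 * u * u    # tracking + effort
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Apply the first action, then re-plan from the new state each step.
x = 0.0
for _ in range(100):
    x += mpc_step(x, target=2.0) * 0.1
print(round(x, 1))  # settles at the 2.0 set point
```

Re-solving at every step is what lets MPC absorb disturbances: if the target or the state changes between steps, the next optimization simply starts from the new situation.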

Reinforcement learning techniques also play a crucial role in refining the adaptive control strategies of robots by allowing them to learn from interactions with their environment (Yu et al., 2022). For example, policy gradient methods like Proximal Policy Optimization (PPO) (Cano Lopes et al., 2018) optimize control policies by directly adjusting the policy parameters in a direction that improves performance, which is crucial in dynamic settings where the robot must adapt to sudden changes in the environment. Actor-critic models like Asynchronous Advantage Actor-Critic (A3C) adopt a dual-model approach in which the ‘actor’ proposes actions and the ‘critic’ evaluates them with respect to the current policy. This method is effective in environments where both the current state and the action must be considered when making decisions (Zhang et al., 2019). These techniques continuously optimize control strategies to address human inputs and environmental shifts, enhancing the robot’s interactive and adaptive capabilities (Oliff et al., 2020).

3.3.2 Motion and path planning

Effective motion and path planning ensures that robotic systems execute tasks efficiently and accurately. Robotic arm motion planning focuses on the movement of joints to reach a desired position or perform a task with the end-effector; it deals with articulation in joint space and must consider the kinematics and dynamics of the arm. Mobile robot path planning concerns the navigation of a robot through an environment from one point to another, often on flat or varied terrain; it involves collision avoidance with static and dynamic obstacles and typically operates in a 2D or 3D Cartesian space. The planning algorithms for these two applications can be divided into two main types: traditional methods and machine-learning-based methods.

Conventional approaches

For path planning, existing algorithms typically represent the environment with graph structures such as point graphs and trees. Graph-based search algorithms like Dijkstra’s and A* are widely applied in the path planning field (Lotfi et al., 2021); they utilize a graph representation of the environment to find the shortest path between two points. For instance, such algorithms are commonly employed in warehouse robots to navigate between shelves in a fixed layout (Zhang et al., 2022b). Complementary to the basic graph algorithms, visibility graphs connect all vertices directly to each other without intersecting obstacles to form a graph; they can be used in automated guided vehicles in factories where the layout is fixed. In addition, Voronoi diagrams decompose the space into regions closest to a set of predefined points (Huang et al., 2021a) and are widely used in path planning for agricultural robots navigating between crop rows. The probabilistic roadmaps algorithm, integrated with a pre-mapped roadmap (Chen et al., 2021a), plans paths by capturing the connectivity of the robot’s free space through random sampling. Additionally, the rapidly-exploring random tree (RRT) is an algorithm designed for efficiently searching high-dimensional spaces by randomly building a space-filling tree (Zhao et al., 2023). The tree starts from the initial state and expands toward randomly generated points in the state space, rapidly exploring the space while ensuring paths are feasible, which makes it suitable for planning in complex environments with obstacles present.
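The warehouse use case can be illustrated with an A* sketch on a small grid. The floor layout, 4-connected motion model, and Manhattan heuristic are illustrative assumptions.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected grid with a Manhattan-distance heuristic.

    `grid` is a list of strings where '#' marks an obstacle (e.g. shelving
    in a warehouse layout); returns the shortest path length in steps, or
    None if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_set = [(h(start), 0, start)]   # (f = g + h, g, cell)
    best = {start: 0}
    while open_set:
        f, g, (r, c) = heapq.heappop(open_set)
        if (r, c) == goal:
            return g
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                if g + 1 < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = g + 1
                    heapq.heappush(open_set, (g + 1 + h((nr, nc)), g + 1, (nr, nc)))
    return None

# Illustrative warehouse floor: '.' free, '#' shelving.
floor = ["....#....",
         "..#.#.##.",
         "..#...#..",
         "....#...."]
print(astar(floor, (0, 0), (3, 8)))  # 11
```

Because the Manhattan heuristic never overestimates the remaining distance on this grid, the first time the goal is popped the path length is guaranteed optimal, which is what distinguishes A* from plain best-first search.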

For motion planning, a robotic arm can also utilize RRT to determine a feasible path in complex environments, such as reaching inside a cluttered space (Yuan et al., 2020). The A* algorithm can likewise find the shortest path in a high-dimensional configuration space for a robotic arm’s end-effector to follow. Meanwhile, mathematical representations like polynomial splines are used to generate smooth trajectories (Choi et al., 2017). They are commonly used to interpolate the waypoints that a robot’s end effector must pass through, ensuring that the trajectory is continuous and differentiable. For example, polynomial splines are used in robotic surgery and the assembly of delicate components because of their requirements for precision and smooth control. Dynamic Programming (DP) solves problems by breaking them down into simpler subproblems (Ferrentino et al., 2023). It is used in trajectory optimization to calculate the optimal sequence of actions from a given state to a goal state, and is employed in tasks requiring optimal path decisions in complex environments, like robotic arms sorting items based on size and weight.
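For a single rest-to-rest segment, the cubic spline coefficients follow directly from the boundary conditions, as the sketch below shows with illustrative endpoints and duration.

```python
def cubic_segment(q0, q1, T):
    """Cubic polynomial joining q0 to q1 over time T with zero end velocities.

    Imposing q(0)=q0, q(T)=q1, q'(0)=q'(T)=0 fixes the four coefficients,
    giving the standard rest-to-rest smooth waypoint segment.
    """
    a2 = 3.0 * (q1 - q0) / T ** 2
    a3 = -2.0 * (q1 - q0) / T ** 3
    def q(t):
        return q0 + a2 * t * t + a3 * t ** 3
    return q

q = cubic_segment(q0=0.0, q1=10.0, T=2.0)
print(q(0.0), q(1.0), q(2.0))  # 0.0 5.0 10.0
```

Velocity is continuous and zero at both ends, so chaining such segments through successive waypoints yields a trajectory the end effector can follow without jerky starts and stops.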

Machine-learning based methods

Machine-learning-based models such as CNNs can support real-time obstacle detection from environment images and assist autonomous vehicle navigation (Santos and Victorino, 2021, Zeng et al., 2024a). Reinforcement learning, in turn, trains agents to optimize actions based on rewards, learning from interactions with the environment (Cai et al., 2023b, Luo and Schomaker, 2023). It is used in robotic arms to learn complex assembly tasks in environments modeled during training (Apolinarska et al., 2021). For instance, an industrial robotic arm learns over time the best strategies for assembling intricate electronics, adapting its movements to variations in component shapes and sizes based on past successes and failures. Reinforcement learning can also generate the paths of automated guided vehicles in a factory based on training rewards (Zhu and Zhang, 2021). Additionally, imitation learning, or programming by expert demonstration, opens a new research area for optimizing robotic arm trajectories (Manuel Davila Delgado and Oyedele, 2022, Luo and Schomaker, 2023); under this scheme, robots can learn work experience from a human demonstrator.

3.3.3 Applications in IBM

In the context of AIR-aided in-factory modular building assembly, the integration of advanced control policies and models plays a pivotal role in enhancing efficiency, precision, and adaptability.

Precision assembly

Precision assembly is critical in ensuring that structural components fit seamlessly to meet stringent quality standards. When placing components, advanced servo control systems are deployed to maneuver large structural elements such as beams and panels with high precision (Eversmann et al., 2017). These systems ensure exact placement, critical in structures requiring tight tolerances. SMC and state feedback control are crucial for operations that involve dynamic loads, such as automated welding (Rahul et al., 2018). SMC ensures the welding apparatus operates consistently under varying loads, while state feedback control adjusts in real-time to maintain the precision required for high-quality welds.

Material handling and robotic navigation

Efficient material handling and navigation are pivotal for timely and cost-effective assembly processes. Graph-based search algorithms like A* and Dijkstra’s optimize the paths for automated guided vehicles (AGVs) transporting materials across the factory floor (López et al., 2022, Zhang et al., 2022b). This application reduces material handling time and minimizes congestion. In environments with both static and dynamic obstacles, RRT algorithms plan efficient routes for robotic arms and mobile robots, ensuring that materials and tools are delivered precisely where needed without delays (Shu et al., 2022). Potential fields and sequential quadratic programming (SQP) manage the movement of multiple robots working in close proximity (Alonso-Mora et al., 2017). These techniques prevent potential collisions and ensure safe interactions between robots and human operators, crucial in crowded assembly environments (Hartmann et al., 2023).

Process optimization and adaptive control

Optimizing assembly processes and adapting to real-time changes in the production environment ensure operational resilience and efficiency. PID control and feedforward control adjust the operation parameters of welding robots to adapt to varying joint requirements and material properties (Wang et al., 2020a). This ensures strong, consistent welds across different modules. Polynomial splines are used to generate smooth trajectories for assembly tools, critical in processes like automated drilling or painting, where the quality of the finish is paramount (Sun et al., 2022). MPC and reinforcement learning dynamically adjust the assembly processes based on real-time feedback from sensors and ongoing performance data (Zhu et al., 2021, Liu et al., 2022, Zhang et al., 2022a). MPC predicts future system states to preemptively adjust machine settings, while reinforcement learning continually refines process strategies to improve efficiency and reduce waste.

3.4 Coordination and collaboration

Building upon the foundations laid in the previous sections on cognition, communication, and control, this section delves into coordination and collaboration within robotic systems. Sections 3.4.1 and 3.4.2 examine techniques for inter-robot coordination and the various levels of human-robot collaboration, each classified by the degree of autonomy involved. Section 3.4.3 then discusses these techniques’ applications in modular building assembly, showcasing how they enhance efficiency and accuracy in construction.

3.4.1 Coordination and collaboration in robotic systems

Efficient coordination and collaboration in robotic systems are essential in scenarios where harmonious robot interactions are critical, such as automated manufacturing lines, disaster response operations, and smart logistics (Soori et al., 2024). These systems require sophisticated algorithms and frameworks to ensure that multiple robots can work together seamlessly, executing complex tasks with high precision and adaptability. Fig.5 illustrates the algorithms and example applications for inter-robot coordination and human-robot collaboration.

Collaborative control frameworks

Collaborative control frameworks are essential in environments where multiple robots operate together via communication protocols like ROS and DDS, either through centralized or distributed systems, each offering unique benefits depending on the operational requirements.

In a centralized control system, a master robot takes charge of coordinating the actions of subordinate robots, making it ideal for environments that require high precision and synchronized activity. For example, in industrial manufacturing projects, a centralized system could oversee and manage the placement and assembly of large prefabricated segments, ensuring that each piece is joined in perfect alignment according to the specifications (Jose and Pratihar, 2016, Boccella et al., 2020). This centralized approach ensures that complex tasks are executed flawlessly and efficiently, reducing the risk of costly errors. Conversely, distributed control systems grant autonomy to individual robots, allowing them to make decisions based on real-time data from their immediate environment. This system is particularly beneficial in logistics and warehouse management, where robots independently navigate and sort packages, adapting their routes and tasks based on current demands and obstacles (Lu et al., 2014, Kattepur et al., 2018). Each robot operates based on localized decision-making processes that enhance flexibility and responsiveness, crucial for maintaining high efficiency in dynamic environments.

For multi-agent coordination, self-organizing systems enable robots to work toward collective goals without centralized oversight (Du et al., 2019, M. Tehrani et al., 2023). An application of this is in drone swarms used for agricultural monitoring or environmental assessments, where each drone autonomously adjusts its flight path in response to the swarm’s collective behavior and external environmental factors (Albani et al., 2017). Such systems rely on local interactions and real-time data exchange, allowing the swarm to cover large areas more efficiently than single drones operating independently.

These collaborative systems also incorporate consensus algorithms like Raft and Paxos to ensure that all decisions and data exchanges within the network are consistent and reliable (Carrara et al., 2020), which is crucial for critical applications such as financial services or infrastructure monitoring where errors can have significant repercussions.

Integrated scheduling

Integrated scheduling is critical for optimizing the distribution of tasks among robots in automated manufacturing lines or other complex environments (Rahman et al., 2020). Techniques like linear programming and network flow analysis play crucial roles in these settings.

Linear programming helps define linear relationships through objective functions and constraints, effectively minimizing time and costs by ensuring that robots are assigned to tasks where they are most needed (Kolakowska et al., 2014). For instance, in an automotive assembly line, linear programming can sequence robot tasks to streamline the installation of parts (Zhang et al., 2021), ensuring efficient use of time and robotic resources. Network flow analysis complements this by mapping the dependencies between tasks, ensuring that the entire production process is optimized (Lerlertpakdee et al., 2014). This method is vital in scenarios like electronic component manufacturing, where precise timing and order of assembly steps are critical to the product’s integrity and functionality.
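The kind of objective such formulations optimize can be illustrated with a tiny robot-task assignment instance. The sketch below finds the minimum-total-time assignment by exhaustive enumeration; the cost matrix is invented, and at realistic scale this objective would be handed to a linear-programming or Hungarian-method solver rather than enumerated.

```python
from itertools import permutations

def assign_tasks(cost):
    """Minimum-cost one-to-one assignment of robots to tasks.

    `cost[i][j]` is the time robot i needs for task j (illustrative units).
    Exhaustive enumeration stands in for an LP/ILP solver to keep the
    objective explicit.
    """
    n = len(cost)
    best_total, best_assign = float("inf"), None
    for perm in permutations(range(n)):          # perm[i] = task for robot i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best_assign = total, perm
    return best_total, best_assign

# Three robots, three installation tasks (minutes, illustrative).
times = [[14,  5,  8],
         [ 2, 12,  6],
         [ 7,  8,  3]]
print(assign_tasks(times))  # (10, (1, 0, 2))
```

The returned tuple reads as robot 0 taking task 1, robot 1 taking task 0, and robot 2 taking task 2, which is exactly the decision-variable pattern an LP formulation of the assignment problem would output.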

Furthermore, data streams also help build an integrated scheduling schema. The real-time sensor data feedback dynamically adjusts scheduling strategies to effectively respond to unexpected events and changes during production (Syafrudin et al., 2018). For example, if a robot malfunctions, the system can quickly reassign tasks among the remaining robots. Predictive models like ARIMA and LSTM leverage historical data to forecast future workloads and resource demands (Ayvaz and Alpay, 2021), optimizing production plans and resource scheduling, thereby reducing wait times and increasing production efficiency.

Optimization of resource allocation and conflict management

Optimization of resource allocation and conflict management is crucial for better collaboration among robots (Rahman and Wang, 2018). Market mechanisms and auction algorithms provide dynamic and adaptive methods for resource allocation (Wang et al., 2017a), where agents bid for tasks, with the system dynamically assigning tasks based on bid outcomes. This process, exemplified by Vickrey auctions and double-sided auctions, ensures the most efficient allocation of resources, optimizing resource utilization and reducing operational costs. Such mechanisms are particularly beneficial in logistics scheduling and smart grid management, where the efficient allocation of resources is critical to maintaining system efficiency and responsiveness.
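A single-item Vickrey auction is simple to state: the highest bidder wins but pays the second-highest price, which makes truthful bidding the dominant strategy. The sketch below uses invented robot names and bid values; real market-based allocation runs many such auctions over streams of tasks.

```python
def vickrey_award(bids):
    """Single-item Vickrey (second-price) auction for task allocation.

    `bids` maps each robot to its bid (e.g. estimated value of taking the
    task). The highest bidder wins at the second-highest bid.
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

# Three robots bid for a material-transport task (illustrative values).
print(vickrey_award({"agv1": 7.5, "agv2": 9.0, "agv3": 6.0}))  # ('agv2', 7.5)
```

Because the winner's payment does not depend on its own bid, each robot's best strategy is to bid its true valuation, which is what makes the mechanism robust for decentralized resource allocation.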

Conflict management and collaborative path planning are essential in multi-robot environments to prevent operational conflicts and ensure seamless collective actions (Chen et al., 2021b). Techniques like multi-robot path planning (Yang et al., 2023), heuristic graph search algorithms (Gammell and Strub, 2021), and collaborative decision trees (Wang and Santoso, 2022) are employed to coordinate movements and tasks, ensuring that robots can operate harmoniously in shared spaces. The integration of leadership-followership dynamics and dynamic role allocation further enhances the adaptability and efficiency of robot teams, critical during complex task execution (Noormohammadi-Asl et al., 2024). Additionally, multi-objective optimization strategies such as genetic algorithms and Particle Swarm Optimization (PSO) address various operational criteria, like path length, energy efficiency, and safety, optimizing paths and strategies to balance complex and competing objectives effectively (Alothaimeen and Arditi, 2020).

Learning and adaptation in robotic networks

Interactive learning and knowledge sharing are pivotal for enhancing the collaborative capabilities of robotic systems. Incorporating advanced learning strategies such as reinforcement learning, semi-supervised learning, and federated learning enables robots to continually adapt and refine their operational strategies based on real-time data. These methodologies are crucial for robots engaged in complex, evolving tasks where adaptability is key. For instance, federated learning allows robots to aggregate decentralized learning experiences, enabling them to benefit from collective insights without compromising data privacy (Martínez Beltrán et al., 2023). This approach is especially valuable in environments requiring high data security, such as healthcare and financial services.

3.4.2 Human-robot Collaboration

Direct control

In the Direct Control model, humans have complete control over robotic systems with very little autonomy on the part of the robot. Operators manipulate these systems using an array of manual controls that might include joysticks, control panels, or sophisticated remote devices, guiding the robots through complex tasks that require precise, real-time human intervention (Brosque et al., 2020).

Technological advancements have significantly enhanced the operator’s ability to monitor and control robots. Key to this are visual and sensory feedback systems that provide a comprehensive understanding of the robot’s operations and its environment. Cameras and vision systems deliver live video feeds, crucial for navigating and manipulating in complex scenarios. Sensor arrays equipped with proximity, tactile, and temperature sensors furnish detailed environmental data, enriching the operator’s situational awareness. Moreover, the integration of AR and VR into control systems marks a significant leap forward. AR enhances real-time video feeds with crucial operational data, while VR can create immersive control environments for both operational and training purposes, allowing operators to interact with remote robots as if they were physically present.

The applications of direct control are diverse and critical, especially in environments where human presence is risky or impractical. For instance, robots operated via direct control are indispensable in hazardous environment operations such as bomb disposal and deep-sea exploration (Xia et al., 2023). These tasks demand a high level of precision and adaptability, where autonomous robots might currently fall short. In remote collaboration scenarios—such as handling hazardous materials in chemical spills or operating in radioactive conditions—operators control robots from a safe distance, supported by advanced communication technologies that ensure reliable and timely command execution.

Moreover, complex teleoperations in sectors like nuclear facility maintenance or space exploration rely heavily on direct control to manage tasks in hostile or remote environments (Su et al., 2023). Similarly, in the medical field, remote surgery and healthcare applications see surgeons performing intricate procedures via robotic systems controlled remotely (Seeliger et al., 2022). This setup not only enhances surgical precision but also extends the reach of specialized medical procedures to remote locations, significantly broadening access to essential healthcare services.

Collaborative autonomy

In the Collaborative Autonomy model, robots possess moderate autonomy and work alongside humans, sharing both the workspace and objectives. This level of human-robot interaction is characterized by shared goals to which both parties contribute dynamically. Successful collaboration at this level requires sophisticated communication protocols and advanced safety measures, ensuring that operations are both effective and secure.

Robots in this category utilize advanced pattern recognition and NLP to understand and respond to human gestures, voice commands, and visual cues. Technologies such as CNNs are used for visual recognition, enabling robots to interpret complex human expressions and actions (Rodrigues et al., 2023). LSTM networks process verbal instructions, allowing robots to engage in responsive and adaptive behaviors that complement human actions.

Safety is paramount in environments where humans and robots collaborate closely. Dynamic safety zones are established around robots to prevent accidents, and emergency stop mechanisms are integrated to respond to potential hazards immediately (Karagiannis et al., 2022). Real-time monitoring systems equipped with anomaly detection algorithms constantly assess the operational area to proactively mitigate risks (Lu et al., 2023), adjusting robot behavior or activating emergency protocols to safeguard human operators.
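The dynamic safety zones described above are often realized as speed-and-separation monitoring, which can be sketched as a simple mapping from human-robot distance to a permitted speed. The thresholds below are illustrative, not values taken from any safety standard.

```python
def scaled_speed(distance_m, v_max=1.5, stop_dist=0.5, slow_dist=2.0):
    """Speed-and-separation monitoring sketch: full speed beyond slow_dist,
    linear slowdown inside it, and a hard stop inside stop_dist."""
    if distance_m <= stop_dist:
        return 0.0                      # emergency stop zone
    if distance_m >= slow_dist:
        return v_max                    # free operation zone
    frac = (distance_m - stop_dist) / (slow_dist - stop_dist)
    return v_max * frac                 # proportional slowdown zone
```

In practice the distance comes from the real-time monitoring sensors described above, and the controller re-evaluates this mapping at every cycle so the robot decelerates smoothly as a worker approaches.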

Force feedback and co-sensing systems play a crucial role in applications that require direct physical interaction between humans and robots. These systems provide intuitive tactile feedback to human operators, enhancing the collaborative experience. For instance, technologies like tactile sensing and biological signal analysis, which monitor indicators such as heart rate and electromyography (EMG), allow robots to adjust their assistance based on real-time assessments of human exertion and fatigue (Lorenzini et al., 2023). This ensures that the robotic assistance is not only effective but also attuned to the physical state of the human partner, optimizing comfort and efficiency in joint tasks.

Applications of collaborative autonomy are diverse and impactful across various sectors. In healthcare, robots assist with surgical procedures, adapting to the movements and requirements of doctors and nurses, thus enhancing precision and reducing the physical strain on medical professionals. In the manufacturing and construction industries, collaborative robots (cobots) work alongside human workers to perform tasks like assembly, painting, or heavy lifting. These cobots are specifically designed to handle burdensome or repetitive tasks, significantly reducing the risk of injury and improving productivity on the production floor. Motion planning algorithms such as Covariant Hamiltonian Optimization for Motion Planning (CHOMP) are also crucial in these settings (Thakar et al., 2022), optimizing robot trajectories to minimize risks associated with high-speed movements and potential collisions in tight workspaces.

Independent autonomy

Robots operate with high to full autonomy, performing tasks independently under predefined guidelines. This level of automation shifts the human role toward monitoring, maintenance, programming, and optimization rather than direct control or active supervision (Selvaggio et al., 2021). This shift is enabled by sophisticated robotic systems designed to self-manage a wide range of operational tasks and adapt to new challenges autonomously.

Robots in this category are equipped with Autonomous Decision Systems that incorporate advanced AI-driven technologies capable of complex decision-making (Huang et al., 2021b). These systems analyze vast amounts of operational data in real-time, allowing robots to handle unexpected situations efficiently. The use of machine learning and deep learning algorithms enables these systems to learn from past experiences, continuously improving their performance and decision-making capabilities over time.

Additionally, Self-Monitoring and Diagnostic Systems are integral to maintaining high operational efficiency and minimizing downtime. These systems, equipped with a variety of sensors and diagnostic algorithms, enable robots to monitor their own health, perform routine diagnostics, and often rectify minor malfunctions independently. This capability significantly reduces the need for human intervention and ensures continuous operation.

In Fully Automated Production Lines, robots autonomously manage everything from material handling to assembly and quality control. This automation allows manufacturing plants to reduce labor costs, minimize human error, and increase consistency and throughput. Additionally, Autonomous Vehicles and Drones operate in complex, real-world environments, undertaking tasks such as deliveries, transportation, and surveillance. These vehicles navigate using advanced sensing and navigation technologies, performing safely and efficiently in areas that might be inaccessible or hazardous for humans, such as disaster-stricken regions or densely populated urban areas. This level of autonomy not only enhances operational capacities but also reshapes industries toward greater efficiency and innovation.

3.4.3 Applications in IBM

Different robotic coordination and human-robot collaboration strategies play an important role in enhancing efficiency, precision, and safety across various tasks in modular building assembly.

Cobots for precision and ergonomics

In modular building factories, collaborative robots (cobots) excel in screwing, bolting, and welding operations, where their ability to execute repetitive tasks reduces human error and enhances product quality (Fu et al., 2024). These robots are designed to work alongside human operators, taking on physically demanding tasks. For instance, in the assembly of large modular units, cobots perform high-strength welding and bolting tasks under human supervision (Wagner et al., 2020), ensuring that connections are secure and meet stringent safety standards. This collaboration allows human workers to focus on overseeing the quality of work and managing tasks that require nuanced judgment.

Dynamic Task allocation with cobots

The integration of cobots in modular building assembly includes a dynamic task allocation system, in which tasks are reassigned in real-time based on the ongoing assessment of project needs and cobot capabilities. For instance, if a specific phase of assembly falls behind schedule, cobots can be dynamically redirected to that area to expedite the process and help meet project deadlines (Zhu et al., 2021, Lee et al., 2022a, Fu et al., 2024). Cobots specialize in different tasks, such as electrical, plumbing, and HVAC installations, and can operate simultaneously, coordinating their activities to avoid interference and ensure smooth integration of all systems within the modular units. This flexible approach not only optimizes resource use but also maintains a consistent workflow, accommodating unexpected delays or issues efficiently.

Assembly sequence optimization

In the planning and execution stages of modular construction, genetic algorithms are utilized to optimize the assembly sequence of the building modules (Cao et al., 2022, Hartmann et al., 2023). These algorithms help in planning the order in which various parts should be assembled by the cobots, ensuring that each step is carried out at the optimal time to maximize efficiency and minimize downtime. This is particularly beneficial in settings where multiple assembly lines are operating simultaneously, as it helps to coordinate the actions of both human workers and robots (Kramberger et al., 2022), ensuring that all resources are utilized effectively without interference.
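A toy version of such a genetic algorithm over assembly orders might look as follows, with fitness taken as the summed changeover cost between consecutive steps; the cost matrix, operators, and hyperparameters are illustrative choices, not those of the cited studies.

```python
import random

def ga_sequence(cost, pop_size=40, generations=150, mut_rate=0.2, seed=1):
    """GA over permutations of assembly steps: tournament selection,
    ordered crossover, swap mutation, and two-individual elitism."""
    rng = random.Random(seed)
    n = len(cost)

    def fitness(seq):
        return sum(cost[seq[i]][seq[i + 1]] for i in range(n - 1))

    def crossover(a, b):
        # Ordered crossover: keep a slice of parent a, fill the rest
        # with parent b's genes in their original relative order.
        i, j = sorted(rng.sample(range(n), 2))
        child = [None] * n
        child[i:j] = a[i:j]
        rest = [g for g in b if g not in child]
        for k in range(n):
            if child[k] is None:
                child[k] = rest.pop(0)
        return child

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        next_pop = pop[:2]                      # elitism
        while len(next_pop) < pop_size:
            a, b = (min(rng.sample(pop, 3), key=fitness) for _ in range(2))
            child = crossover(a, b)
            if rng.random() < mut_rate:
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    best = min(pop, key=fitness)
    return best, fitness(best)
```

The permutation encoding makes every individual a valid assembly order, so the search explores sequences directly rather than repairing infeasible solutions.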

Enhanced collaboration

Agent-based modeling is applied to manage the collaborative efforts of cobots and human workers effectively. In this setup, both cobots and human workers are treated as agents with specific capabilities and roles. Real-time data are used to allocate tasks optimally among agents, considering factors such as task complexity, agent proximity, and current workload. For example, more complex or delicate fitting tasks might be assigned to human workers, while straightforward, repetitive tasks are allocated to cobots (Cai et al., 2023c). This modeling approach not only improves operational efficiency but also ensures that each component of the modular building is assembled with the highest precision.
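One simple way to sketch this agent-based allocation is a greedy scorer that weighs capability, proximity, and current workload when matching each task to an agent; the agent attributes and scoring weights below are invented for illustration.

```python
def assign_tasks(tasks, agents):
    """Greedy agent-based allocation: each task goes to the capable agent
    with the best score. The heuristic prefers the least overqualified,
    closest, least-loaded agent, so delicate work stays with skilled
    (human) agents and routine work flows to cobots."""
    workload = {a["name"]: a["load"] for a in agents}
    plan = {}
    for task in sorted(tasks, key=lambda t: -t["complexity"]):
        def score(agent):
            surplus = agent["skill"] - task["complexity"]
            if surplus < 0:
                return float("-inf")    # agent cannot perform this task
            distance = abs(agent["pos"] - task["pos"])
            return -1.0 * surplus - 0.5 * distance - 1.0 * workload[agent["name"]]
        best = max(agents, key=score)
        plan[task["name"]] = best["name"]
        workload[best["name"]] += task["duration"]
    return plan
```

Updating the workload inside the loop is what makes the allocation dynamic: an agent that just received a long task becomes less attractive for the next one.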

4 Discussion

Building upon the insights gathered from our NLR, this discussion delves into the future potential of enhancing IBM through the four key modules of AIR: cognition, communication, control, and coordination and collaboration. By examining each focus area, we aim to highlight emerging trends, identify persistent challenges, and propose strategic directions for future research and development. Following these analyses, we synthesize our findings to discuss the overall future potential of applying AIR in IBM. This comprehensive exploration seeks to underscore how advancements in these modules can collectively enhance the efficiency, precision, and scalability of AI-aided robotic systems in off-site assembly processes, ultimately propelling IBM toward more autonomous and intelligent manufacturing paradigms.

4.1 Future research directions in AIR cognition

Despite its great potential, current robotic cognition in modular building assembly faces several limitations that hinder optimal integration and operational efficiency, such as inadequate object recognition capabilities, difficulties in dynamic environmental perception, and limited understanding of human activities (Chea et al., 2020, M. Tehrani et al., 2023). These challenges underscore the need for specific advancements and serve as the foundation for the future research and development directions proposed below.

a) Development of Specialized Data sets: Collaborating with industry stakeholders to gather extensive data on various building modules under different environmental conditions helps create robust, high-quality data sets specifically designed for modular building components. Future research should focus on data set curation involving varied lighting, orientations, and partial occlusions to mimic real-world modular building factories. Additionally, leveraging techniques like synthetic data generation could significantly expand data set variability and volume, providing a more comprehensive training ground for deep learning models.

b) Advanced Sensory Integration for Environmental Perception: Enhancing robots' environmental perception requires integrating advanced sensors that can capture detailed environmental data under diverse conditions. Developing algorithms capable of fusing data from these sensors will enable robots to create more accurate and dynamic representations of their surroundings, helping robots better understand and react to changes, such as the unexpected movement of materials or the presence of human workers, enhancing situational awareness and operational safety.

c) Adaptive Learning Models for Object Recognition: To improve object recognition under varying conditions, research should focus on developing adaptive learning models that can adjust to different lighting and material properties. These models could utilize transfer learning and few-shot learning principles to quickly adapt to new or changing conditions without extensive retraining. Implementing continuous learning systems would allow robots to update their models on-the-fly as they encounter new types of materials or changes in lighting, ensuring consistent recognition accuracy.

d) Behavioral Prediction Algorithms: Enhancing robots’ ability to predict and interpret human actions requires advanced behavioral prediction algorithms that can analyze and anticipate human movements and intentions. This could involve developing more sophisticated human-robot interaction models that use real-time data from video and motion sensors to understand human gestures and patterns of movement. Implementing these predictive models would enable robots to proactively adjust their actions to support and synchronize with human workers, fostering smoother and more effective collaboration.

e) Contextual Dialogue Systems for Improved Communication: To address the challenge of robots processing complex instructions and engaging in meaningful dialogue, future efforts should focus on developing contextual dialogue systems that can understand and generate responses based on the context of the target modular building. These systems would employ advanced natural language understanding techniques to parse detailed instructions and provide contextually appropriate responses or ask for clarifications. Training these systems with domain-specific data will ensure that communication between human operators and robots is both accurate and effective, facilitating clearer and more productive interactions.

4.2 Future research directions in AIR communication

The precision and timeliness of robotic tasks are critical in modular building assembly; thus, the communication infrastructure must be robust and highly responsive to ensure seamless robotic operations. The following are key challenges in current communication techniques, along with actions to address them:

a) Implementing 6G network technologies: Current networks may not efficiently adapt to dynamic assembly floor conditions or shifts in production demands, leading to misalignments in robot coordination and errors in assembly precision. Transitioning to 6G networks could drastically improve the adaptability and synchronization of robotic operations, offering enhanced data transfer rates, reduced latency, and greater reliability. This advancement would allow robots to adjust in real-time to environmental changes and inter-machine communications, significantly improving efficiency and reducing errors in modular construction.

b) Advanced AI for predictive maintenance and network management: Existing networks often lack predictive capabilities to foresee and prevent maintenance issues, leading to frequent unplanned downtimes and inefficiencies. Moreover, current network management systems might not dynamically adjust to fluctuating task priorities and workflow changes, leading to inefficiencies and potential project bottlenecks. Integrating sophisticated AI algorithms to continuously monitor robotic systems could significantly reduce downtime by preemptively identifying potential failures and automating maintenance schedules. AI-driven network management could dynamically allocate bandwidth and prioritize machine tasks based on real-time assembly needs, enhancing productivity and preventing delays.
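As a small illustration of the predictive-monitoring idea (not any specific system from the literature), an exponentially weighted moving average can flag sensor readings that drift far from a robot's normal baseline, triggering maintenance before a failure occurs:

```python
class EwmaMonitor:
    """Flag a reading as anomalous when it deviates from the smoothed
    baseline by more than `k` times the smoothed absolute deviation.
    Parameters alpha, k, and warmup are illustrative defaults."""
    def __init__(self, alpha=0.1, k=4.0, warmup=10):
        self.alpha, self.k, self.warmup = alpha, k, warmup
        self.mean = None
        self.dev = 0.0
        self.n = 0

    def update(self, x):
        self.n += 1
        if self.mean is None:
            self.mean = x               # first reading seeds the baseline
            return False
        err = abs(x - self.mean)
        anomalous = (self.n > self.warmup and self.dev > 0
                     and err > self.k * self.dev)
        if not anomalous:               # adapt the baseline only on normal data
            self.mean = (1 - self.alpha) * self.mean + self.alpha * x
            self.dev = (1 - self.alpha) * self.dev + self.alpha * err
        return anomalous
```

Freezing the baseline during anomalies prevents a faulty sensor from dragging the "normal" profile toward the fault, which is a common pitfall of naive moving averages.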

c) Enhancing security with cryptography and blockchain: In modular building assembly, current communication systems may not provide adequate security measures to protect sensitive information, posing risks of data breaches or intellectual property theft. Developing specialized cryptographic protocols is essential to secure sensitive data transfers across factory networks. Furthermore, implementing blockchain technology could bolster the traceability and security of the assembly process, ensuring that each component and operation is verifiable and safeguarded against tampering or intellectual property theft. These measures would protect valuable assets and maintain the integrity of the modular building process.

d) Optimizing assembly processes with IoT integration: There is often an underutilization of the data generated by IoT devices in current systems, which can significantly optimize assembly processes. Leveraging the underutilized data from IoT devices in modular building assembly lines could create a fully interconnected environment where all components communicate seamlessly. AI could analyze this data to optimize workflows, determine optimal assembly sequences, and adjust processes in real-time based on feedback from the assembly floor. This smart assembly system would streamline operations, improve resource allocation, boost energy efficiency, and shorten overall assembly time, leading to more sustainable practices and reduced costs.

4.3 Future research directions in AIR control

Precise and efficient control is critical in modular building assembly. The following directions hold promise for enhancing current control algorithms and augmenting the control and functionality of robots in IBA:

a) Adaptive control systems with real-time analytics: Current industrial robot systems often excel in stable, predictable environments but struggle with dynamic changes and unexpected variations such as irregular component sizes or mismatched parts. This limited flexibility can lead to bottlenecks and decreased assembly efficiency. Future innovations could involve the development of next-generation adaptive control systems that utilize advanced sensors and real-time data analytics. These systems would enable robots to dynamically adjust their strategies based on immediate feedback from the assembly environment. For example, using AI-driven predictive models, robots could anticipate and mitigate potential assembly issues before they occur, thereby maintaining continuous production flow and reducing waste.

b) Machine learning for improving control: Robots typically perform well within the parameters for which they were programmed, but they can lack the precision needed for tasks requiring adaptation to minute variations in material properties or assembly conditions. By incorporating machine learning algorithms directly into robotic control systems, future robots could continuously learn and improve from each task performed. This capability would allow for finer adjustments in real-time, leading to unprecedented levels of precision. Robots could, for instance, adjust their handling and assembly techniques based on the specific characteristics of each component they work with, leading to tighter fits and better overall assembly quality.

c) Advanced control automation: Most robotic systems in modular assembly rely heavily on operator input and pre-programmed routines, limiting their ability to make autonomous decisions based on situational awareness. With advancements in AI and cognitive computing, robots could be equipped with the capability to make complex decisions independently. For example, robots could decide how best to manipulate components based on real-time scans, detect and correct errors autonomously, and optimize paths and placement techniques on the fly. This level of autonomy would drastically reduce the need for human intervention and streamline the entire assembly process.

d) Robot trajectory learning from experts: Current industrial robots often lack the nuanced skills human operators apply when assembling irregular building modules during IBA. Imitation learning, in which robots learn operational skills from human demonstrations captured via sensors and cameras, offers a remedy. In this way, robots acquire practical skills from expert workers, enhancing their capability to perform specialized tasks such as precise component placement and specific assembly techniques.
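In its simplest form, imitation learning reduces to behavioral cloning: fitting a policy to expert state-action demonstrations. The sketch below fits a one-dimensional linear policy by least squares; real systems would use rich sensor and camera features with neural policies, so this is only a minimal illustration of the principle.

```python
def clone_policy(demos):
    """Behavioral cloning sketch: fit action = a * state + b to expert
    (state, action) pairs by ordinary least squares, then return the
    learned policy as a callable."""
    n = len(demos)
    sx = sum(s for s, _ in demos)
    sy = sum(a for _, a in demos)
    sxx = sum(s * s for s, _ in demos)
    sxy = sum(s * a for s, a in demos)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return lambda state: slope * state + intercept
```

Given demonstrations, the cloned policy generalizes to unseen states along the demonstrated relationship, which is exactly how a robot would reproduce an expert's placement motions in new but similar situations.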

4.4 Future research directions in AIR coordination and collaboration

In the context of in-factory modular building assembly, coordination and collaboration between robots and human workers face unique challenges that require innovative solutions to enhance efficiency and safety. Here’s a detailed exploration of these challenges with proposed technological interventions:

a) Safer human-robot collaboration: Safety issues and technical restrictions currently inhibit close physical interactions between human workers and robots, which limits the potential benefits of collaborative creativity and flexibility in the robotic assembly process. Innovations in safety technologies, including enhanced sensing technologies and smarter safety protocols that detect human presence and intentions in real-time, could facilitate closer interactions between humans and robots. There is a need to develop cobots equipped with pressure-sensitive skin, machine vision, spatial awareness and emergency response algorithms that halt operations instantly when a potential human collision is detected. Implementing these technologies would allow for safer physical proximity between humans and robots, thereby facilitating direct collaboration on tasks such as joint lifting and precise positioning of heavy modules, enhancing the assembly process’s efficiency and safety.

b) Dynamic task allocation: The current systems struggle with dynamically allocating tasks in real-time based on the evolving needs of the project and the varying capabilities of robots and human workers. This results in sub-optimal workflow management and can lead to increased project timelines and costs. The application of agent-based modeling and AI-driven resource management systems could revolutionize dynamic task allocation. These systems would allow for real-time reassignment of tasks and adjustment of resources, optimizing workflow based on the immediate needs of the project and the specific capabilities of each agent. This approach ensures that tasks are handled by the most suitable agent (human or robot), maximizing efficiency and productivity.

c) Optimizing assembly sequences: Current methods for planning the assembly sequence in modular construction are often sub-optimal, leading to material wastage and increased labor costs. Utilizing genetic algorithms and advanced simulation techniques like digital twins could significantly enhance the planning of assembly sequences by simulating and optimizing the assembly process in a virtual environment before physical implementation. These digital twins would model different assembly scenarios to identify the most efficient sequence and predict potential issues, enabling proactive adjustments that minimize waste and reduce costs.

d) Advanced collaborative systems: As modular construction projects increase in complexity, the integration of sophisticated collaborative systems that facilitate seamless robot-to-robot and human-to-robot interactions becomes crucial. A unified control platform would integrate all robotic and human-interface devices into a single user-friendly system, allowing human operators to monitor and control various aspects of the robotic assembly process from a central hub. These systems would incorporate advanced communication networks that support high-speed data exchange and real-time decision-making, enhancing the synchronization of tasks across the assembly line. Such systems would not only speed up the construction process but also increase the adaptability of production lines to handle various design specifications and project changes without extensive downtime for reconfiguration.

4.5 Future research directions for AIR in IBM

Looking forward, the modular building industry encounters both challenges and opportunities with the deeper integration of advanced AIR:

a) Economic and accessibility challenges: The high initial cost and complexity of robotic systems often limit their adoption, particularly within small to medium-sized enterprises focused on modular building assembly. The discussed techniques in control and coordination offer solutions to this challenge:

• The Modular Robotic Systems concept from the control section suggests that developing robots with standardized, interchangeable modules can reduce costs and enhance customization. This would allow companies to invest in robots incrementally, tailoring systems to their specific needs without the upfront expense of complete systems.

• Optimized Resource Allocation strategies from the coordination and collaboration section can streamline operations and reduce wastage, effectively lowering operational costs and making the technology more accessible. Techniques like linear programming and network flow analysis ensure that resources are used efficiently, maximizing output while minimizing input costs.
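The resource-allocation idea can be illustrated with a density-based greedy scheme: fund the options with the highest output per unit cost first. For fractional (divisible) investments this greedy order coincides with the linear-programming optimum; the option names and numbers below are hypothetical.

```python
def allocate_budget(budget, options):
    """Fractional budget allocation by value density (value / cost).

    options: list of (name, cost, value) tuples. Returns a dict mapping
    each funded option to the fraction of it that is funded."""
    plan, remaining = {}, budget
    # Highest output-per-cost first; the last option may be partly funded.
    for name, cost, value in sorted(options, key=lambda o: o[2] / o[1],
                                    reverse=True):
        take = min(cost, remaining)
        if take <= 0:
            break
        plan[name] = take / cost        # fraction of the option funded
        remaining -= take
    return plan
```

For indivisible investments (whole robots rather than fractions) the problem becomes an integer program and the greedy order is only a heuristic, which is where the linear programming and network flow techniques mentioned above come in.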

b) Sustainability concerns: Robotic systems are typically associated with high energy demands and significant resource utilization, which may conflict with global sustainability goals. The sections on communication and collaboration provide relevant advancements:

• Energy-efficient Communication Protocols discussed in the communication section, such as advanced network technologies that reduce data transmission energy costs, can be crucial. Implementing these would decrease the overall energy footprint of robotic operations.

• Dynamic Task Allocation within the collaboration section promotes operational efficiency by ensuring that robots operate at peak efficiency, reducing energy usage. Methods like multi-robot path planning optimize the movements of robots to prevent redundant or inefficient paths that increase energy consumption.

c) Advancements in AI Integration: As AI technology continues to evolve, its integration with robotic systems in modular building assembly is anticipated to deepen, as discussed across all sections, particularly cognition:

• Advanced Environmental Perception technologies enable robots to understand and interact with their environment more effectively, reducing errors and rework. This not only enhances operational efficiency but also reduces material waste by ensuring tasks are done correctly the first time.

• Intelligent Interaction Protocols from the communication and cognition sections allow robots to anticipate human actions and adapt to changes quickly, ensuring that the assembly process is not only faster but also more adaptable to varying conditions on the factory floor and different modular building design.

• Learning and Adaptation in Robotic Networks, leveraging machine learning and adaptive algorithms, enables continuous improvement in robotic operations. Robots equipped with these capabilities can adapt to new assembly processes, learn from past mistakes, and optimize their actions for future tasks, driving both innovation and efficiency in building assembly.

• Next-generation AI, such as large language models (LLMs) and generative AI, enables smoother cooperation among robots and between humans and robots. It allows more intelligent robotic control, optimized coordination, and work sequence planning based on the unique characteristics of each task. Moreover, customized LLM-based AI assistants could help build harmonious human-robot collaboration.

By addressing these challenges through the discussed technological advancements, the modular building industry can achieve higher levels of automation, precision, and efficiency.

5 Conclusions

This paper has provided an in-depth exploration of AIR within the context of IBM, with a particular focus on in-factory modular building assembly. We began by analyzing the cognitive capabilities of AI systems, which enable robots to make decisions akin to human reasoning. In addition to these cognitive functions, we thoroughly discussed communication techniques essential for effective interaction within robotic systems, both among robots and between robots and humans, underscoring the importance of seamless connectivity as the backbone of collaborative operations.

We then examined control mechanisms that ensure the precise and effective functioning of robotic operations, crucial for the accurate execution of complex tasks. Building on these aspects, our exploration of coordination and collaboration technologies demonstrated how these elements are integrated to manage the interdependencies and collective actions of robots and humans within the production environment. This holistic integration of cognition, communication, control, coordination, and collaboration forms a robust framework that significantly enhances efficiency, quality, and scalability in modular building assembly. The integration of these technologies represents a significant shift toward autonomous systems that can execute complex, collaborative tasks with efficiency and precision. This advancement is not just technological; it fundamentally transforms manufacturing approaches, enabling smarter, faster, and more adaptable building processes.

Overall, this analysis and the ensuing discussion provide a solid foundation for researchers, scholars, and practitioners in the modular building sector to anchor their ongoing and future projects. It outlines a framework that not only encapsulates comprehensive current techniques and applications but also sets the stage for forthcoming innovations that drive the industry toward higher levels of autonomy and efficiency. Despite the comprehensive scope of this review, it is important to acknowledge certain limitations that may influence the breadth and depth of the findings presented. One notable limitation is the restriction of the literature review to articles published in English. This exclusion of potentially valuable insights from non-English sources may overlook emerging innovations and diverse perspectives in the field of AIR for modular building assembly. It could benefit from a broader selection of publications to encompass a more global and multidisciplinary perspective. Additionally, the review predominantly synthesizes published theories and reported applications, which inherently lacks empirical validation specific to the use of advanced AIR in modular construction. Future research could enhance the robustness of these findings by incorporating empirical studies that test and verify the proposed technologies and methods in real-world settings, thereby solidifying the foundation for advancing the field.

References

[1]

Acheampong F A, Nunoo-Mensah H, Chen W, (2021). Transformer models for text-based emotion detection: A review of BERT-based approaches. Artificial Intelligence Review, 54( 8): 5789–5829

[2]

Ahmadian Fard Fini A, Maghrebi M, Forsythe P J, Waller T S, (2022). Using existing site surveillance cameras to automatically measure the installation speed in prefabricated timber construction. Engineering, Construction, and Architectural Management, 29( 2): 573–600

[3]

Aijaz A, (2021). Infrastructure-less wireless connectivity for mobile robotic systems in logistics: Why Bluetooth mesh networking is important? In: 2021 26th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). p. 1–8

[4]

Albani D, IJsselmuiden J, Haken R, Trianni V, (2017). Monitoring and mapping with robot swarms for agricultural applications. In: 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS). p. 1–6

[5]

Alberto N T, Skuric A, Joseph L, Padois V, Daney D, (2023). Model predictive control for robots adapting their task space motion online

[6]

Alonso-Mora J, Baker S, Rus D, (2017). Multi-robot formation control and object transport in dynamic environments via constrained optimization. International Journal of Robotics Research, 36( 9): 1000–1021

[7]

Alothaimeen I, Arditi D, (2020). Overview of multi-objective optimization approaches in construction project management. In: Vakhania N, Werner F, eds. Multicriteria Optimization - Pareto-Optimality and Threshold-Optimality. IntechOpen

[8]

Anane W, Iordanova I, Ouellet-Plamondon C, (2022). Modular robotic prefabrication of discrete aggregations driven by BIM and computational design. Procedia Computer Science, 200: 1103–1112

[9]

Aonty S S, Deb K, Sarma M S, Dhar P K, Shimamura T, (2023). Multi-person pose estimation using group-based convolutional neural network model. IEEE Access: Practical Innovations, Open Solutions, 11: 42343–42360

[10]

Apolinarska A A, Pacher M, Li H, Cote N, Pastrana R, Gramazio F, Kohler M, (2021). Robotic assembly of timber joints using reinforcement learning. Automation in Construction, 125: 103569

[11]

Artetxe E, Barambones O, Calvo I, Fernández-Bustamante P, Martin I, Uralde J, (2023). Wireless technologies for industry 4.0 applications. Energies, 16( 3): 1349

[12]

Asghari V, Wang Y, Biglari A J, Hsu S C, Tang P, (2022). Reinforcement learning in construction engineering and management: A review. Journal of Construction Engineering and Management, 148( 11): 03122009

[13]

Augugliaro F, Lupashin S, Hamer M, Male C, Hehn M, Mueller M W, Willmann J S, Gramazio F, Kohler M, D’Andrea R, (2014). The flight assembled architecture installation: Cooperative construction with flying machines. IEEE Control Systems, 34( 4): 46–64

[14]

Ayvaz S, Alpay K, (2021). Predictive maintenance system for production lines in manufacturing: A machine learning approach using IoT data in real-time. Expert Systems with Applications, 173: 114598

[15]

Back S, Kim J, Kang R, Choi S, Lee K, (2020). Segmenting unseen industrial components in a heavy clutter using RGB-D fusion and synthetic data. In: 2020 IEEE International Conference on Image Processing (ICIP). p. 828–832

[16]

Baduge S K, Thilakarathna S, Perera J S, Arashpour M, Sharafi P, Teodosio B, Shringi A, Mendis P, (2022). Artificial intelligence and smart vision for building and construction 4.0: Machine and deep learning methods and applications. Automation in Construction, 141: 104440

[17]

Bae J, Han S, (2021). Vision-based inspection approach using a projector-camera system for off-site quality control in modular construction: Experimental investigation on operational conditions. Journal of Computing in Civil Engineering, 35( 5): 04021012

[18]

Baek J, Kwon W, (2020). Practical adaptive sliding-mode control approach for precise tracking of robot manipulators. Applied Sciences, 10( 8): 2909

[19]

Bai R, Wang M, Zhang Z, Lu J, Shen F, (2023). Automated construction site monitoring based on improved YOLOv8-seg instance segmentation algorithm. IEEE Access: Practical Innovations, Open Solutions, 11: 139082–139096

[20]

Bao L, Han C, Li G, Chen J, Wang W, Yang H, Huang X, Guo J, Wu H, (2023). Flexible electronic skin for monitoring of grasping state during robotic manipulation. Soft Robotics, 10( 2): 336–344

[21]

Bazhanov A, Yudin D, Porkhalo V, Karikov E, (2016). Control system of robotic complex for constructions and buildings printing. In: 2016 International Conference on Information and Digital Technologies (IDT). p. 23–31

[22]

Bencak P, Hercog D, Lerher T, (2022). Indoor positioning system based on bluetooth low energy technology and a nature-inspired optimization algorithm. Electronics, 11( 3): 308

[23]

Bhatt P M, Malhan R K, Shembekar A V, Yoon Y J, Gupta S K, (2020). Expanding capabilities of additive manufacturing through use of robotics technologies: A survey. Additive Manufacturing, 31: 100933

[24]

Boccella A R, Centobelli P, Cerchione R, Murino T, Riedel R, (2020). Evaluating centralized and heterarchical control of smart manufacturing systems in the era of Industry 4.0. Applied Sciences, 10( 3): 755

[25]

Bowmaster J, Rankin J, (2019). A research roadmap for off-site construction: Automation and robotics. Modular and Offsite Construction (MOC) Summit Proceedings, 173–180

[26]

Brosque C, Galbally E, Khatib O, Fischer M, (2020). Human-robot collaboration in construction: Opportunities and challenges. In: 2020 International Congress on Human-Computer Interaction, Optimization and Robotic Applications (HORA). p. 1–8

[27]

Brosque C, Hawkins J T, Dong T, Örn J, Fischer M, (2023). Comparison of on-site and off-site robot solutions to the traditional framing and drywall installation tasks. Construction Robotics, 7( 1): 19–39

[28]

Buchanan C, Gardner L, (2019). Metal 3D printing in construction: A review of methods, research, applications, opportunities and challenges. Engineering Structures, 180: 332–348

[29]

Cai J, Chen J, Hu Y, Li S, He Q, (2023a). Digital twin for healthy indoor environment: a vision for the post-pandemic era. Frontiers of Engineering Management, 10( 2): 300–318

[30]

Cai J, Du A, Liang X, Li S, (2023b). Prediction-based path planning for safe and efficient human–robot collaboration in construction via deep reinforcement learning. Journal of Computing in Civil Engineering, 37( 1): 04022046

[31]

Cai M, Liang R, Luo X, Liu C, (2023c). Task allocation strategies considering task matching and ergonomics in the human-robot collaborative hybrid assembly cell. International Journal of Production Research, 61( 21): 7213–7232

[32]

Cano Lopes G, Ferreira M, da Silva Simões A, Luna Colombini E, (2018). Intelligent control of a quadrotor with proximal policy optimization reinforcement learning. In: 2018 Latin American Robotic Symposium, 2018 Brazilian Symposium on Robotics (SBR) and 2018 Workshop on Robotics in Education (WRE). p. 503–508

[33]

Cao J, Zhao P, Liu G, (2022). Optimizing the production process of modular construction using an assembly line-integrated supermarket. Automation in Construction, 142: 104495

[34]

Cao M Y, Laws S, Baena F R, (2021). Six-axis force/torque sensors for robotics applications: A review. IEEE Sensors Journal, 21( 24): 27238–27251

[35]

Carrara G R, Burle L M, Medeiros D S V, de Albuquerque C V N, Mattos D M F, (2020). Consistency, availability, and partition tolerance in blockchain: A survey on the consensus mechanism over peer-to-peer networking. Annales des Télécommunications, 75( 3–4): 163–174

[36]

Cattari N, Piazza R, D’Amato R, Fida B, Carbone M, Condino S, Cutolo F, Ferrari V, (2020). Towards a wearable augmented reality visor for high-precision manual tasks. In: 2020 IEEE International Symposium on Medical Measurements and Applications (MeMeA). p. 1–6

[37]

Chea C P, Bai Y, Pan X, Arashpour M, Xie Y, (2020). An integrated review of automation and robotic technologies for structural prefabrication and construction. Transportation Safety and Environment, 2( 2): 81–96

[38]

Cheltha J N, Sharma C, Dadheech P, Goyal D, (2024). Overcoming occlusion challenges in human motion detection through advanced deep learning techniques. International Journal of Intelligent Systems and Applications in Engineering, 12: 497–513

[39]

Chen C, Wang T, Li D, Hong J, (2020). Repetitive assembly action recognition based on object detection and pose estimation. Journal of Manufacturing Systems, 55: 325–333

[40]

Chen G, Luo N, Liu D, Zhao Z, Liang C, (2021a). Path planning for manipulators based on an improved probabilistic roadmap method. Robotics and Computer-integrated Manufacturing, 72: 102196

[41]

Chen J, Kira Z, Cho Y K, (2019). Deep learning approach to point cloud scene understanding for automated scan to 3D reconstruction. Journal of Computing in Civil Engineering, 33( 4): 04019027

[42]

Chen X, Guhl J, (2018). Industrial robot control with object recognition based on deep learning. Procedia CIRP, 76: 149–154

[43]

Chen Y, Rosolia U, Ames A D, (2021b). Decentralized task and path planning for multi-robot systems. IEEE Robotics and Automation Letters, 6( 3): 4337–4344

[44]

Chen Z, Wang J, Liu J, Khan K, (2021c). Seismic behavior and moment transfer capacity of an innovative self-locking inter-module connection for modular steel building. Engineering Structures, 245: 112978

[45]

Cheng J, Yang Y, Zou X, Zuo Y, (2024). 5G in manufacturing: a literature review and future research. International Journal of Advanced Manufacturing Technology, 131( 11): 5637–5659

[46]

Cheng Z, Tang S, Liu H, Lei Z, (2023). Digital technologies in offsite and prefabricated construction: Theories and applications. Buildings, 13( 1): 163

[47]

Choi Y, Kim D, Hwang S, Kim H, Kim N, Han C, (2017). Dual-arm robot motion planning for collision avoidance using B-spline curve. International Journal of Precision Engineering and Manufacturing, 18( 6): 835–843

[48]

Dawod M, Hanna S, (2019). BIM-assisted object recognition for the on-site autonomous robotic assembly of discrete structures. Construction Robotics, 3: 69–81

[49]

De Beelde B, Plets D, Joseph W, (2021). Wireless sensor networks for enabling smart production lines in industry 4.0. Applied Sciences, 11( 23): 11248

[50]

de Gea Fernández J, Mronga D, Günther M, Knobloch T, Wirkus M, Schröer M, Trampler M, Stiene S, Kirchner E, Bargsten V, Bänziger T, Teiwes J, Krüger T, Kirchner F, (2017). Multimodal sensor-based whole-body control for human–robot collaboration in industrial settings. Robotics and Autonomous Systems, 94: 102–119

[51]

Deng J, Pang G, Zhang Z, Pang Z, Yang H, Yang G, (2019). CGAN-based facial expression recognition for human-robot interaction. IEEE Access: Practical Innovations, Open Solutions, 7: 9848–9859

[52]

Docekal J, Rozlivek J, Matas J, Hoffmann M, (2022). Human keypoint detection for close proximity human-robot interaction. In: 2022 IEEE-RAS 21st International Conference on Humanoid Robots (Humanoids). p. 450–457

[53]

Dong B, Prakash V, Feng F, O’Neill Z, (2019). A review of smart building sensing system for better indoor environment control. Energy and Building, 199: 29–46

[54]

Dönmez E, Kocamaz A F, (2020). Design of mobile robot control infrastructure based on decision trees and adaptive potential area methods. Iranian Journal of Science and Technology. Transaction of Electrical Engineering, 44( 1): 431–448

[55]

Du J, Jing H, Castro-Lacouture D, Sugumaran V, (2019). Multi-agent simulation for managing design changes in prefabricated construction projects. Engineering, Construction, and Architectural Management, 27( 1): 270–295

[56]

Dymora P, Lichacz G, Mazurek M, (2023). Performance analysis of a real-time data warehouse system implementation based on open-source technologies: 63–73

[57]

Ekanayake E M A C, Shen G Q P, Kumaraswamy M M, Owusu E K, Saka A B, (2021). Modeling supply chain resilience in industrialized construction: A Hong Kong case. Journal of Construction Engineering and Management, 147( 11): 05021009

[58]

Etz D, Frühwirth T, Ismail A, Kastner W, (2018). Simplifying functional safety communication in modular, heterogeneous production lines. In: 2018 14th IEEE International Workshop on Factory Communication Systems (WFCS). p. 1–4

[59]

Eversmann P, Gramazio F, Kohler M, (2017). Robotic prefabrication of timber structures: towards automated large-scale spatial assembly. Construction Robotics, 1( 1–4): 49–60

[60]

Evjemo L D, Gjerstad T, Grøtli E I, Sziebig G, (2020). Trends in smart manufacturing: role of humans and industrial robots in smart factories. Current Robotics Reports, 1( 2): 35–41

[61]

Ferdous W, Bai Y, Ngo T D, Manalo A, Mendis P, (2019). New advancements, challenges and opportunities of multi-storey modular buildings – A state-of-the-art review. Engineering Structures, 183: 883–893

[62]

Ferrentino E, Savino H J, Franchi A, Chiacchio P, (2023). A dynamic programming framework for optimal planning of redundant robots along prescribed paths with kineto-dynamic constraints. IEEE Transactions on Automation Science and Engineering, 21( 4): 1–14

[63]

Fu Y, Chen J, Lu W, (2024). Human-robot collaboration for modular construction manufacturing: review of academic research. Automation in Construction, 158: 105196

[64]

Gammell J D, Strub M P, (2021). Asymptotically optimal sampling-based motion planning methods. Annual Review of Control, Robotics, and Autonomous Systems, 4( 1): 295–318

[65]

Gawel A, Blum H, Pankert J, Krämer K, Bartolomei L, Ercan S, Farshidian F, Chli M, Gramazio F, Siegwart R, Hutter M, Sandy T, (2019). A fully-integrated sensing and control system for high-accuracy mobile robotic building construction. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). p. 2300–2307

[66]

Geng R, Li M, Hu Z, Han Z, Zheng R, (2022). Digital twin in smart manufacturing: remote control and virtual machining using VR and AR technologies. Structural and Multidisciplinary Optimization, 65( 11): 321

[67]

Gentile A F, Macrì D, Greco E, Forestiero A, (2023). Privacy-oriented architecture for building automatic voice interaction systems in smart environments in disaster recovery scenarios. In: 2023 International Conference on Information and Communication Technologies for Disaster Management (ICT-DM). p. 1–8

[68]

Ginigaddara B, Perera S, Feng Y, Rahnamayiezekavat P, Kagioglou M, (2024). Industry 4.0 driven emerging skills of offsite construction: A multi-case study-based analysis. Construction Innovation, 24( 3): 747–769

[69]

Golec M, Gudlin M, Gregurić P, Hegedić M, (2023). Development of testing system for the application of artificial intelligence in quality. In: 2023 22nd International Symposium INFOTEH-JAHORINA (INFOTEH). p. 1–6

[70]

Guo H, Zhang Z, Yu R, Sun Y, Li H, (2023). Action recognition based on 3D skeleton and LSTM for the monitoring of construction workers’ safety harness usage. Journal of Construction Engineering and Management, 149( 4): 04023015

[71]

Guo Y, Freer D, Deligianni F, Yang G Z, (2022). Eye-tracking for performance evaluation and workload estimation in space telerobotic training. IEEE Transactions on Human–Machine Systems, 52( 1): 1–11

[72]

Gusmao Brissi S, Wong Chong O, Debs L, Zhang J, (2022). A review on the interactions of robotic systems and lean principles in offsite construction. Engineering, Construction, and Architectural Management, 29( 1): 383–406

[73]

Hack N, Dörfler K, Walzer A N, Wangler T, Mata-Falcón J, Kumar N, Buchli J, Kaufmann W, Flatt R J, Gramazio F, Kohler M, (2020). Structural stay-in-place formwork for robotic in situ fabrication of non-standard concrete structures: A real scale architectural demonstrator. Automation in Construction, 115: 103197

[74]

Hannun A, Case C, Casper J, Catanzaro B, Diamos G, Elsen E, Prenger R, Satheesh S, Sengupta S, Coates A, Ng A Y, (2014). Deep speech: Scaling up end-to-end speech recognition

[75]

Hao K, Zhao J, Wang B, Liu Y, Wang C, (2021). The application of an adaptive genetic algorithm based on collision detection in path planning of mobile robots. Computational Intelligence and Neuroscience, 2021( 1): 5536574

[76]

Hartmann V N, Orthey A, Driess D, Oguz O S, Toussaint M, (2023). Long-horizon multi-robot rearrangement planning for construction assembly. IEEE Transactions on Robotics, 39( 1): 239–252

[77]

Hasanzadeh S, Esmaeili B, Dodd M D, (2017). Measuring the impacts of safety knowledge on construction workers’ attentional allocation and hazard detection using remote eye-tracking technology. Journal of Management Engineering, 33( 5): 04017024

[78]

He J, Zhang J, Liu J, Fu X, (2022). A ROS2-based framework for industrial automation systems. In: 2022 2nd International Conference on Computer, Control and Robotics (ICCCR). p. 98–102

[79]

He R, Li M, Gan V J L, Ma J, (2021). BIM-enabled computerized design and digital fabrication of industrialized buildings: A case study. Journal of Cleaner Production, 278: 123505

[80]

Hou L, Tan Y, Luo W, Xu S, Mao C, Moon S, (2022). Towards a more extensive application of off-site construction: A technological review. International Journal of Construction Management, 22( 11): 2154–2165

[81]

Hou S, Dong B, Wang H, Wu G, (2020). Inspection of surface defects on stay cables using a robot and transfer learning. Automation in Construction, 119: 103382

[82]

Hu D, Gan V J L, Yin C, (2023). Robot-assisted mobile scanning for automated 3D reconstruction and point cloud semantic segmentation of building interiors. Automation in Construction, 152: 104949

[83]

Huang S K, Wang W J, Sun C H, (2021a). A path planning strategy for multi-robot moving with path-priority order based on a generalized voronoi diagram. Applied Sciences, 11( 20): 9650

[84]

Huang X, Li T, Chen W, (2022). PCB assembly component recognition based on semantic segmentation and attention mechanism. International Core Journal of Engineering, 8

[85]

Yamaguchi T, Kashiwagi T, Arie T, Akita S, Takei K, (2019). Human–Like electronic skin–integrated soft robotic hand. Advanced Intelligent Systems, 1: 1900018

[86]

Huang Z, Fan Y, (2018). Life cycle carbon emissions of industrialized buildings based on BIM. In: International Conference on Construction and Real Estate Management 2018. p. 95–103

[87]

Huang Z, Shen Y, Li J, Fey M, Brecher C, (2021b). A survey on AI-driven digital twins in Industry 4.0: Smart manufacturing and advanced robotics. Sensors, 21( 19): 6340

[88]

Muhammad I, Ying K, Nithish S M, Jin Y, Zhao X, Cheah C C, (2021). Robot-assisted object detection for construction automation: Data and information-driven approach. IEEE/ASME Transactions on Mechatronics, 26( 6): 2845–2856

[89]

Jiang Y, Li M, Guo D, Wu W, Zhong R Y, Huang G Q, (2022). Digital twin-enabled smart modular integrated construction system for on-site assembly. Computers in Industry, 136: 103594

[90]

Jose K, Pratihar D K, (2016). Task allocation and collision-free path planning of centralized multi-robots system for industrial plant inspection using heuristic methods. Robotics and Autonomous Systems, 80: 34–42

[91]

Jung B, You H, Lee S, (2023). Anomaly candidate extraction and detection for automatic quality inspection of metal casting products using high-resolution images. Journal of Manufacturing Systems, 67: 229–241

[92]

Kaiser B, Strobel T, Verl A, (2021). Human-robot collaborative workflows for reconfigurable fabrication systems in timber prefabrication using augmented reality. In: 2021 27th International Conference on Mechatronics and Machine Vision in Practice (M2VIP). p. 576–581

[93]

Kalinowska A, Pilarski P M, Murphey T D, (2023). Embodied communication: how robots and people communicate through physical interaction. Annual Review of Control, Robotics, and Autonomous Systems, 6( 1): 205–232

[94]

Karagiannis P, Kousi N, Michalos G, Dimoulas K, Mparis K, Dimosthenopoulos D, Tokçalar Ö, Guasch T, Gerio G P, Makris S, (2022). Adaptive speed and separation monitoring based on switching of safety zones for effective human robot collaboration. Robotics and Computer-integrated Manufacturing, 77: 102361

[95]

Karthik S, Sharareh K, Behzad R, (2020). Modular construction vs. traditional construction: Advantages and limitations: A comparative study. In: Creative Construction e-Conference 2020. p. 11–19

[96]

Kattepur A, Rath H K, Mukherjee A, Simha A, (2018). Distributed optimization framework for industry 4.0 automated warehouses. EAI Endorsed Transactions on Industrial Networks and Intelligent Systems, 5( 15): e2

[97]

Kheradmandi N, Mehranfar V, (2022). A critical review and comparative study on image segmentation-based techniques for pavement crack detection. Construction & Building Materials, 321: 126162

[98]

Kim J, Chung D, Kim Y, Kim H, (2022). Deep learning-based 3D reconstruction of scaffolds using a robot dog. Automation in Construction, 134: 104092

[99]

Kim P, Chen J, Cho Y K, (2018). SLAM-driven robotic mapping and registration of 3D point clouds. Automation in Construction, 89: 38–48

[100]

Kolakowska E, Smith S F, Kristiansen M, (2014). Constraint optimization model of a scheduling problem for a robotic arm in automatic systems. Robotics and Autonomous Systems, 62( 2): 267–280

[101]

Koyama K, Shimojo M, Senoo T, Ishikawa M, (2018). High-speed high-precision proximity sensor for detection of tilt, distance, and contact. IEEE Robotics and Automation Letters, 3( 4): 3224–3231

[102]

Kramberger A, Kunic A, Iturrate I, Sloth C, Naboni R, Schlette C, (2022). Robotic assembly of timber structures in a human-robot collaboration setup. Frontiers in Robotics and AI, 8

[103]

Ku Y, Yang J, Fang H, Xiao W, Zhuang J, (2021). Deep learning of grasping detection for a robot used in sorting construction and demolition waste. Journal of Material Cycles and Waste Management, 23: 84–95

[104]

Lam T F, Blum H, Siegwart R, Gawel A, (2022). SL sensor: an open-source, real-time and robot operating system-based structured light sensor for high accuracy construction robotic applications. Automation in Construction, 142: 104424

[105]

Lee D, Lee S, Masoud N, Krishnan M S, Li V C, (2022a). Digital twin-driven deep reinforcement learning for adaptive task allocation in robotic construction. Advanced Engineering Informatics, 53: 101710

[106]

Lee S, Jeong M, Cho C S, Park J, Kwon S, (2022b). Deep learning-based PC member crack detection and quality inspection support technology for the precise construction of OSC projects. Applied Sciences, 12( 19): 9810

[107]

Lerlertpakdee P, Jafarpour B, Gildin E, (2014). Efficient production optimization with flow-network models. SPE Journal, 19( 6): 1083–1095

[108]

Li B, Zhang W, Li Y, Tian W, Wang C, (2022). Positional accuracy improvement of an industrial robot using feedforward compensation and feedback control. Journal of Dynamic Systems, Measurement, and Control, 144

[109]

Li C, Chrysostomou D, Yang H, (2023a). A speech-enabled virtual assistant for efficient human–robot interaction in industrial environments. Journal of Systems and Software, 205: 111818

[110]

Li R, Qiao H, (2019). A survey of methods and strategies for high-precision robotic grasping and assembly tasks—Some new trends. IEEE/ASME Transactions on Mechatronics, 24: 2718–2732

[111]

Li W, Han Y, Wu J, Xiong Z, (2020). Collision detection of robots based on a force/torque sensor at the bedplate. IEEE/ASME Transactions on Mechatronics, 25( 5): 2565–2573

[112]

Li X, Jiang X, Liu Y, (2021). Construction robot localization system based on multi-sensor fusion and 3D construction drawings. In: 2021 IEEE International Conference on Robotics and Biomimetics (ROBIO). p. 814–819

[113]

Li Y, Yang G, Su Z, Li S, Wang Y, (2023b). Human activity recognition based on multienvironment sensor data. Information Fusion, 91: 47–63

[114]

Li Z, Deng J, Lu R, Xu Y, Bai J, Su C Y, (2016). Trajectory-tracking control of mobile robot systems incorporating neural-dynamic optimized model predictive approach. IEEE Transactions on Systems, Man, and Cybernetics. Systems, 46( 6): 740–749

[115]

Li Z, Xu B, Wu D, Zhao K, Chen S, Lu M, Cong J, (2023c). A YOLO-GGCNN based grasping framework for mobile robots in unknown environments. Expert Systems with Applications, 225: 119993

[116]

Liang C J, McGee W, Menassa C C, Kamat V R, (2022). Real-time state synchronization between physical construction robots and process-level digital twins. Construction Robotics, 6( 1): 57–73

[117]

Liang C J, Wang X, Kamat V R, Menassa C C, (2021). Human–robot collaboration in construction: Classification and research trends. Journal of Construction Engineering and Management, 147( 10): 03121006

[118]

Lin C T, Lu H J, (2024). An intelligent product-driven manufacturing system using data distribution service. IEEE Access: Practical Innovations, Open Solutions, 12: 16447–16461

[119]

Lin F, Chen Y, Zhao Y, Wang S, (2019). Path tracking of autonomous vehicle based on adaptive model predictive control. International Journal of Advanced Robotic Systems, 16( 5): 1–1

[120]

Liu C, Sepasgozar S M E, Shirowzhan S, Mohammadi G, (2021a). Applications of object detection in modular construction based on a comparative evaluation of deep learning algorithms. Construction Innovation, 22: 141–159

[121]

Liu Y, Habibnezhad M, Jebelli H, (2021b). Brain-computer interface for hands-free teleoperation of construction robots. Automation in Construction, 123: 103523

[122]

Liu Z, Liu Q, Xu W, Wang L, Zhou Z, (2022). Robot learning towards smart robotic manufacturing: A review. Robotics and Computer-integrated Manufacturing, 77: 102360

[123]

López J, Zalama E, Gómez-García-Bermejo J, (2022). A simulation and control framework for AGV-based transport systems. Simulation Modelling Practice and Theory, 116: 102430

[124]

Lorenzini M, Lagomarsino M, Fortini L, Gholami S, Ajoudani A, (2023). Ergonomic human-robot collaboration in industry: A review. Frontiers in Robotics and AI, 9: 1–1

[125]

Lotfi M, Osório G J, Javadi M S, Ashraf A, Zahran M, Samih G, Catalão J P S, (2021). A Dijkstra-inspired graph algorithm for fully autonomous tasking in industrial applications. IEEE Transactions on Industry Applications, 57: 5448–5460

[126]

Lourenço L L, Oliveira G, Méa Plentz P D, Röning J, (2021). Achieving reliable communication between Kafka and ROS through bridge codes. In: 2021 20th International Conference on Advanced Robotics (ICAR). p. 324–329

[127]

Lu W, Chen J, Fu Y, Pan Y, Ghansah F A, (2023). Digital twin-enabled human-robot collaborative teaming towards sustainable and healthy built environments. Journal of Cleaner Production, 412: 137412

[128]

Lu W, Giannikas V, McFarlane D, Hyde J, (2014). The role of distributed intelligence in warehouse management systems. In: Borangiu T, Trentesaux D, Thomas A, eds. Service Orientation in Holonic and Multi-Agent Manufacturing and Robotics. Cham: Springer International Publishing. p. 63–77

[129]

Luo S, Schomaker L, (2023). Reinforcement learning in robotic motion planning by combined experience-based planning and self-imitation learning. Robotics and Autonomous Systems, 170: 104545

[130]

Ma Z, Xiao M, Xiao Y, Pang Z, Poor H V, Vucetic B, (2019). High-reliability and low-latency wireless communication for internet of things: challenges, fundamentals, and enabling technologies. IEEE Internet of Things Journal, 6( 5): 7946–7970

[131]

Mahiri F, Najoua A, Ben Souda S, (2022). 5G-enabled IIoT framework architecture towards sustainable smart manufacturing. International Journal of Online and Biomedical Engineering (iJOE), 18( 4): 4–20

[132]

Manuel Davila Delgado J, Oyedele L, (2022). Robotics in construction: A critical review of the reinforcement learning and imitation learning paradigms. Advanced Engineering Informatics, 54: 101787

[133]

Martínez Beltrán E T, Pérez M Q, Sánchez P M S, Bernal S L, Bovet G, Pérez M G, Pérez G M, Celdrán A H, (2023). Decentralized federated learning: fundamentals, state of the art, frameworks, trends, and challenges. IEEE Communications Surveys and Tutorials, 25( 4): 2983–3013

[134]

Martinova L, Sokolov S, Pushkov R, (2023). Integration of intelligent industrial systems into a workshop-level information network. In: 2023 International Russian Smart Industry Conference (SmartIndustryCon). p. 77–82

[135]

Mishra R B, El‐Atab N, Hussain A M, Hussain M M, (2021). Recent progress on flexible capacitive pressure sensors: from design and materials to applications. Advanced Materials Technologies, 6( 4): 2001023

[136]

Mohammed H, Romdhane L, Jaradat M A, (2021). RRT*N: an efficient approach to path planning in 3d for static and dynamic environments. Advanced Robotics, 35( 3–4): 168–180

[137]

Moon J S, Seo W, Ahn H, Kim J, (2024). Inspection of intermodular connection locations for multistory modular buildings. Measurement and Control, 57( 7): 949–965

[138]

Moragane H P M N L B, Perera B A K S, Palihakkara A D, Ekanayake B, (2024). Application of computer vision for construction progress monitoring: A qualitative investigation. Construction Innovation, 24( 2): 446–469

[139]

Muñoz-Bañón M Á, Velasco-Sánchez E, Candelas F A, Torres F, (2022). OpenStreetMap-based autonomous navigation with lidar naive-valley-path obstacle avoidance. IEEE Transactions on Intelligent Transportation Systems, 23( 12): 24428–24438

[140]

Mustafha M D A, Thamrin N M, Abdullah S A C, Mohamad Z, (2020). An IoT-based production monitoring system for assembly line in manufacture. International Journal of Integrated Engineering, 12: 38–45

[141]

Nam S, Yoon J, Kim K, Choi B, (2020). Optimization of prefabricated components in housing modular construction. Sustainability (Basel), 12( 24): 10269

[142]

Naranjo J E, Lozada E C, Espín H I, Beltran C, García C A, García M V, (2018). Flexible architecture for transparency of a bilateral tele-operation system implemented in mobile anthropomorphic robots for the oil and gas industry. IFAC-PapersOnLine, 51( 8): 239–244

[143]

Noormohammadi-Asl A, Fan K, Smith S L, Dautenhahn K, (2024). Human leading or following preferences: Effects on human perception of the robot and the human-robot collaboration

[144]

Nurlan Z, Zhukabayeva T, Othman M, Adamova A, Zhakiyev N, (2022). Wireless sensor network as a mesh: Vision and challenges. IEEE Access: Practical Innovations, Open Solutions, 10: 46–67

[145]

Oliff H, Liu Y, Kumar M, Williams M, Ryan M, (2020). Reinforcement learning for facilitating human-robot-interaction in manufacturing. Journal of Manufacturing Systems, 56: 326–340

[146]

Osti F, de Amicis R, Sanchez C A, Tilt A B, Prather E, Liverani A, (2021). A VR training system for learning and skills development for construction workers. Virtual Reality (Waltham Cross), 25( 2): 523–538

[147]

Ozioko O, Dahiya R, (2022). Smart tactile gloves for haptic interaction, communication, and rehabilitation. Advanced Intelligent Systems, 4( 2): 2100091

[148]

Pan M, Yang Y, Zheng Z, Pan W, (2022). Artificial intelligence and robotics for prefabricated and modular construction: A systematic literature review. Journal of Construction Engineering and Management, 148( 9): 03122004

[149]

Panda B, Lim J H, Noor Mohamed N A, Paul S C, Tay Y W D, Tan M J, (2017). Automation of robotic concrete printing using feedback control system. In: 34th International Symposium on Automation and Robotics in Construction

[150]

Pare S, Kumar A, Singh G K, Bajaj V, (2020). Image segmentation using multilevel thresholding: a research review. Iranian Journal of Science and Technology. Transaction of Electrical Engineering, 44( 1): 1–29

[151]

Parisi F, Sangiorgio V, Parisi N, Mangini A M, Fanti M P, Adam J M, (2024). A new concept for large additive manufacturing in construction: tower crane-based 3D printing controlled by deep reinforcement learning. Construction Innovation, 24( 1): 8–32

[152]

Park S, Wang X, Menassa C C, Kamat V R, Chai J Y, (2024). Natural language instructions for intuitive human interaction with robotic assistants in field construction work. Automation in Construction, 161: 105345

[153]

Pereira C E, Diedrich C, Neumann P, (2023). Communication protocols for automation. In: Nof S Y, ed. Springer Handbook of Automation. Cham: Springer International Publishing. p. 535–560

[154]

Pfeiffer S, (2016). Robots, industry 4.0 and humans, or why assembly work is more than routine work. Societies, 6( 2): 16

[155]

Prati A, Shan C, Wang K I K, (2019). Sensors, vision and networks: from video surveillance to activity recognition and health monitoring. Journal of Ambient Intelligence and Smart Environments, 11: 5–22

[156]

Pullen T, Hall D, Lessing J (2019). A preliminary overview of emerging trends for industrialized construction in the United States. 29

[157]

Pyo S, Lee J, Bae K, Sim S, Kim J, (2021). Recent progress in flexible tactile sensors for human–interactive systems: from sensors to advanced applications. Advanced Materials, 33( 47): 2005902

[158]

Qi J, Yang H, Sun H, (2021). MOD-RRT*: A sampling-based algorithm for robot path planning in dynamic environment. IEEE Transactions on Industrial Electronics, 68( 8): 7244–7251

[159]

Qin Z, Wang P, Sun J, Lu J, Qiao H, (2016). Precise robotic assembly for large-scale objects based on automatic guidance and alignment. IEEE Transactions on Instrumentation and Measurement, 65: 1398–1411

[160]

Rahman H F, Janardhanan M N, Nielsen P, (2020). An integrated approach for line balancing and AGV scheduling towards smart assembly systems. Assembly Automation, 40( 2): 219–234

[161]

Rahman M H, Xie C, Sha Z, (2021). Predicting sequential design decisions using the function-behavior-structure design process model and recurrent neural networks. Journal of Mechanical Design, 143: 081706

[162]

Rahman S M M, Wang Y, (2018). Mutual trust-based subtask allocation for human–robot collaboration in flexible lightweight assembly in manufacturing. Mechatronics, 54: 94–109

[163]

Rahul S G, Dhivyasri G, Kavitha P, Arungalai Vendan S, Kumar K A R, Garg A, Gao L, (2018). Model reference adaptive controller for enhancing depth of penetration and bead width during cold metal transfer joining process. Robotics and Computer-integrated Manufacturing, 53: 122–134

[164]

Rao A S, Radanovic M, Liu Y, Hu S, Fang Y, Khoshelham K, Palaniswami M, Ngo T, (2022). Real-time monitoring of construction sites: Sensors, methods, and applications. Automation in Construction, 136: 104099

[165]

Riordan A D O, Toal D, Newe T, Dooly G, (2019). Object recognition within smart manufacturing. Procedia Manufacturing, 38: 408–414

[166]

Rodrigues P B, Singh R, Oytun M, Adami P, Woods P J, Becerik-Gerber B, Soibelman L, Copur-Gencturk Y, Lucas G M, (2023). A multidimensional taxonomy for human-robot interaction in construction. Automation in Construction, 150: 104845

[167]

Santos M F, Victorino A C (2021). Autonomous vehicle navigation based in a hybrid methodology: Model based and machine learning based. In: 2021 IEEE International Conference on Mechatronics (ICM). p. 1–6

[168]

Seeliger B, Collins J W, Porpiglia F, Marescaux J (2022). The role of virtual reality, telesurgery, and teleproctoring in robotic surgery. In: Wiklund P, Mottrie A, Gundeti M S, Patel V, eds. Robotic Urologic Surgery. Cham: Springer International Publishing. p. 61–77

[169]

Selvaggio M, Cognetti M, Nikolaidis S, Ivaldi S, Siciliano B, (2021). Autonomy in physical human-robot interaction: A brief survey. IEEE Robotics and Automation Letters, 6( 4): 7989–7996

[170]

Shamshiri A, Ryu K R, Park J Y, (2024). Text mining and natural language processing in construction. Automation in Construction, 158: 105200

[171]

Sherafat B, Rashidi A, Asgari S, (2022). Sound-based multiple-equipment activity recognition using convolutional neural networks. Automation in Construction, 135: 104104

[172]

Shi H, Li R, Bai X, Zhang Y, Min L, Wang D, Lu X, Yan Y, Lei Y, (2023a). A review for control theory and condition monitoring on construction robots. Journal of Field Robotics, 40( 4): 934–954

[173]

Shi L, Zhang Y, Cheng J, Lu H, (2020). Skeleton-based action recognition with multi-stream adaptive graph convolutional networks. IEEE Transactions on Image Processing, 29: 9532–9545

[174]

Shi Q, Sun Z, Le X, Xie J, Lee C, (2023b). Soft robotic perception system with ultrasonic auto-positioning and multimodal sensory intelligence. ACS Nano, 17( 5): 4985–4998

[175]

Shi Y, Shen G, (2024). Haptic sensing and feedback techniques toward virtual reality. Research, 7: 0333

[176]

Shu J, Li W, Gao Y, (2022). Collision-free trajectory planning for robotic assembly of lightweight structures. Automation in Construction, 142: 104520

[177]

Shu L, Mukherjee M, Pecht M, Crespi N, Han S N, (2018). Challenges and research issues of data management in IoT for large-scale petrochemical plants. IEEE Systems Journal, 12( 3): 2509–2523

[178]

Singla R, Sharma S, Sharma S K, (2023). Infrared imaging for detection of defects in concrete structures. IOP Conference Series. Materials Science and Engineering, 1289( 1): 012064

[179]

Slaton T, Hernandez C, Akhavian R, (2020). Construction activity recognition with convolutional recurrent networks. Automation in Construction, 113: 103138

[180]

Soori M, Dastres R, Arezoo B, Karimi Ghaleh Jough F (2024). Intelligent robotic systems in Industry 4.0: A review. Journal of Advanced Manufacturing Science and Technology, 2024007

[181]

Stjepandić J, Bondar S, Korol W (2022). Object recognition findings in a built environment. In: Stjepandić J, Sommer M, Denkena B, eds. DigiTwin: An Approach for Production Process Optimization in a Built Environment. Cham: Springer International Publishing. p. 155–179

[182]

Su Y P, Chen X Q, Zhou C, Pearson L H, Pretty C G, Chase J G, (2023). Integrating virtual, mixed, and augmented reality into remote robotic applications: A brief review of extended reality-enhanced robotic systems for intuitive telemanipulation and telemanufacturing tasks in hazardous conditions. Applied Sciences (Basel, Switzerland), 13( 22): 12129

[183]

Sun Y, Jia J, Xu J, Chen M, Niu J, (2022). Path, feedrate and trajectory planning for free-form surface machining: A state-of-the-art review. Chinese Journal of Aeronautics, 35( 8): 12–29

[184]

Syafrudin M, Alfian G, Fitriyani N L, Rhee J, (2018). Performance analysis of IoT-based sensor, big data processing, and machine learning model for real-time monitoring system in automotive manufacturing. Sensors, 18( 9): 2946

[185]

Tehrani M B, BuHamdan S, Alwisy A, (2023). Robotics in assembly-based industrialized construction: A narrative review and a look forward. International Journal of Intelligent Robotics and Applications, 7: 556–574

[186]

Thai H T, Ngo T, Uy B, (2020). A review on modular construction for high-rise buildings. Structures, 28: 1265–1290

[187]

Thakar S, Rajendran P, Kabir A M, Gupta S K, (2022). Manipulator motion planning for part pickup and transport operations from a moving base. IEEE Transactions on Automation Science and Engineering, 19( 1): 191–206

[188]

Tulbure A A, Tulbure A A, Dulf E H, (2022). A review on modern defect detection models using DCNNs – Deep convolutional neural networks. Journal of Advanced Research, 35: 33–48

[189]

Urgo M, Berardinucci F, Zheng P, Wang L (2024). AI-based pose estimation of human operators in manufacturing environments. In: Tolio T, ed. CIRP Novel Topics in Production Engineering: Volume 1. Cham: Springer Nature Switzerland. p. 3–38

[190]

Vasquez J, Furuhata T, Shimada K, (2023). Image-enhanced U-Net: optimizing defect detection in window frames for construction quality inspection. Buildings, 14( 1): 3

[191]

Viana D, Tommelein I, Formoso C, (2017). Using modularity to reduce complexity of industrialized building systems for mass customization. Energies, 10( 10): 1622

[192]

Vlantis P, Bechlioulis C P, Kyriakopoulos K J, (2023). Robot navigation in complex workspaces employing harmonic maps and adaptive artificial potential fields. Sensors, 23( 9): 4464

[193]

Vogel-Heuser B, Fischer J, Feldmann S, Ulewicz S, Rösch S, (2017). Modularity and architecture of PLC-based software for automated production systems: An analysis in industrial companies. Journal of Systems and Software, 131: 35–62

[194]

Wagner H J, Alvarez M, Kyjanek O, Bhiri Z, Buck M, Menges A, (2020). Flexible and transportable robotic timber construction platform – tim. Automation in Construction, 120: 103400

[195]

Wang B, Hu S J, Sun L, Freiheit T, (2020a). Intelligent welding system technologies: State-of-the-art review and perspectives. Journal of Manufacturing Systems, 56: 373–391

[196]

Wang K J, Santoso D, (2022). A smart operator advice model by deep learning for motion recognition in human–robot coexisting assembly line. International Journal of Advanced Manufacturing Technology, 119( 1-2): 865–884

[197]

Wang L, Liu M, Meng M Q H, (2017a). A hierarchical auction-based mechanism for real-time resource allocation in cloud robotic systems. IEEE Transactions on Cybernetics, 47: 473–484

[198]

Wang M, Hu D, Chen J, Li S, (2023a). Underground infrastructure detection and localization using deep learning enabled radargram inversion and vision based mapping. Automation in Construction, 154: 105004

[199]

Wang M, Wang C C, Sepasgozar S, Zlatanova S, (2020b). A systematic review of digital technology adoption in off-site construction: current status and future direction towards industry 4.0. Buildings, 10( 11): 204

[200]

Wang N, Issa R R A, Anumba C J, (2022a). NLP-based query-answering system for information extraction from building information models. Journal of Computing in Civil Engineering, 36( 3): 04022004

[201]

Wang Q, Tan Y, Mei Z, (2020c). Computational methods of acquisition and processing of 3D point cloud data for construction applications. Archives of Computational Methods in Engineering, 27( 2): 479–499

[202]

Wang S, Ouyang J, Li D, Liu C, (2017b). An integrated industrial ethernet solution for the implementation of smart factory. IEEE Access: Practical Innovations, Open Solutions, 5: 25455–25462

[203]

Wang W, Chen Z, Chen X, Wu J, Zhu X, Zeng G, Luo P, Lu T, Zhou J, Qiao Y, Dai J, (2023b). VisionLLM: Large language model is also an open-ended decoder for vision-centric tasks. Advances in Neural Information Processing Systems, 36: 61501–61513

[204]

Wang X, Liang C J, Menassa C C, Kamat V R, (2021a). Interactive and immersive process-level digital twin for collaborative human-robot construction work. Journal of Computing in Civil Engineering, 35( 6): 04021023

[205]

Wang X, Veeramani D, Zhu Z, (2023c). Wearable sensors-based hand gesture recognition for human-robot collaboration in construction. IEEE Sensors Journal, 23( 1): 495–505

[206]

Wang X, Yu H, McGee W, Menassa C, Kamat V (2024). Enabling building information model-driven human-robot collaborative construction workflows with closed-loop digital twins

[207]

Wang Z, Bai X, Zhang S, Billinghurst M, He W, Wang Y, Han D, Chen G, Li J, (2021b). The role of user-centered ar instruction in improving novice spatial cognition in a high-precision procedural task. Advanced Engineering Informatics, 47: 101250

[208]

Wang Z, Wang E, Zhu Y, (2020d). Image segmentation evaluation: A survey of methods. Artificial Intelligence Review, 53( 8): 5637–5674

[209]

Wang Z, Zhang Y, Mosalam K M, Gao Y, Huang S L, (2022b). Deep semantic segmentation for visual understanding on construction sites. Computer-Aided Civil and Infrastructure Engineering, 37( 2): 145–162

[210]

Willmann J, Knauss M, Bonwetsch T, Apolinarska A A, Gramazio F, Kohler M, (2016). Robotic timber construction — Expanding additive fabrication to new dimensions. Automation in Construction, 61: 16–23

[211]

Wu C, Li X, Guo Y, Wang J, Ren Z, Wang M, Yang Z, (2022a). Natural language processing for smart construction: Current status and future directions. Automation in Construction, 134: 104059

[212]

Wu L, Lu W, Xue F, Li X, Zhao R, Tang M, (2022b). Linking permissioned blockchain to internet of things (IoT)-BIM platform for off-site production management in modular construction. Computers in Industry, 135: 103573

[213]

Xia P, Xu F, Song Z, Li S, Du J, (2023). Sensory augmentation for subsea robot teleoperation. Computers in Industry, 145: 103836

[214]

Xiao B, Kang S C, (2021). Development of an image data set of construction machines for deep learning object detection. Journal of Computing in Civil Engineering, 35( 2): 05020005

[215]

Yang C H, Kang S C, (2021). Collision avoidance method for robotic modular home prefabrication. Automation in Construction, 130: 103853

[216]

Yang L, Li P, Qian S, Quan H, Miao J, Liu M, Hu Y, Memetimin E, (2023). Path planning technique for mobile robots: A review. Machines, 11( 10): 980

[217]

Yasin J N, Mohamed S A S, Haghbayan M H, Heikkonen J, Tenhunen H, Plosila J, (2021). Low-cost ultrasonic based object detection and collision avoidance method for autonomous robots. International Journal of Information Technology : An Official Journal of Bharati Vidyapeeth’s Institute of Computer Applications and Management, 13( 1): 97–107

[218]

Yeom J, Jung M, Kim Y, (2017). Detecting damaged building parts in earthquake-damaged areas using differential seeded region growing. International Journal of Remote Sensing, 38( 4): 985–1005

[219]

Yin C, Wang B, Gan V J L, Wang M, Cheng J C P, (2021). Automated semantic segmentation of industrial point clouds using ResPointNet++. Automation in Construction, 130: 103874

[220]

Yin X, Liu H, Chen Y, Al-Hussein M, (2019). Building information modelling for off-site construction: Review and future directions. Automation in Construction, 101: 72–91

[221]

Yoshino D, Watanobe Y, Naruse K, (2021). A highly reliable communication system for internet of robotic things and implementation in rt-middleware with amqp communication interfaces. IEEE Access: Practical Innovations, Open Solutions, 9: 167229–167241

[222]

Younesi Heravi M, Jang Y, Jeong I, Sarkar S, (2024). Deep learning-based activity-aware 3d human motion trajectory prediction in construction. Expert Systems with Applications, 239: 122423

[223]

Yu X, Fan Y, Xu S, Ou L, (2022). A self-adaptive sac-pid control approach based on reinforcement learning for mobile robots. International Journal of Robust and Nonlinear Control, 32( 18): 9625–9643

[224]

Yuan C, Xiong B, Li X, Sang X, Kong Q, (2022). A novel intelligent inspection robot with deep stereo vision for three-dimensional concrete damage detection and quantification. Structural Health Monitoring, 21( 3): 788–802

[225]

Yuan C, Zhang W, Liu G, Pan X, Liu X, (2020). A heuristic rapidly-exploring random trees method for manipulator motion planning. IEEE Access: Practical Innovations, Open Solutions, 8: 900–910

[226]

Yuan Y, Ye S, Lin L, (2021). Process monitoring with support of IoT in prefabricated building construction. Sensors and Materials, 33( 4): 1167

[227]

Yumnam M, Gupta H, Ghosh D, Jaganathan J, (2021). Inspection of concrete structures externally reinforced with frp composites using active infrared thermography: A review. Construction & Building Materials, 310: 125265

[228]

Yun S, Park S H, Seo P H, Shin J (2023). IFSeg: Image-free semantic segmentation via vision-language model. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. p. 2967–2977

[229]

Zeng T, Ferdowsi A, Semiari O, Saad W, Hong C S, (2024a). Convergence of communications, control, and machine learning for secure and autonomous vehicle navigation. IEEE Wireless Communications, 31( 4): 132–138

[230]

Zeng T, Mohammad A, Madrigal A G, Axinte D, Keedwell M, (2024b). A robust human–robot collaborative control approach based on model predictive control. IEEE Transactions on Industrial Electronics, 71( 7): 7360–7369

[231]

Zhang B, Zhu Z, Hammad A, Aly W, (2018). Automatic matching of construction onsite resources under camera views. Automation in Construction, 91: 206–215

[232]

Zhang C, Yin Z, Qin R, (2024). Attention-enhanced co-interactive fusion network (aecif-net) for automated structural condition assessment in visual inspection. Automation in Construction, 159: 105292

[233]

Zhang H, Li X, Kan Z, Zhang X, Li Z, (2021). Research on optimization of assembly line based on product scheduling and just-in-time feeding of parts. Assembly Automation, 41( 5): 577–588

[234]

Zhang J (2022). Architecture of control system of automobile inspection line based on CAN bus. In: 2022 IEEE 2nd International Conference on Power, Electronics and Computer Applications (ICPECA). p. 634–638

[235]

Zhang M, Xu R, Wu H, Pan J, Luo X, (2023a). Human–robot collaboration for on-site construction. Automation in Construction, 150: 104812

[236]

Zhang R, Lv Q, Li J, Bao J, Liu T, Liu S, (2022a). A reinforcement learning method for human-robot collaboration in assembly tasks. Robotics and Computer-integrated Manufacturing, 73: 102227

[237]

Zhang Y, Lei Z, Han S, Bouferguene A, Al-Hussein M, (2020). Process-oriented framework to improve modular and offsite construction manufacturing performance. Journal of Construction Engineering and Management, 146( 9): 04020116

[238]

Zhang Y, Li S, Nolan K J, Zanotto D (2019). Adaptive assist-as-needed control based on actor-critic reinforcement learning. In: 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). p. 4066–4071

[239]

Zhang Z, Chen J, Guo Q, (2022b). Application of automated guided vehicles in smart automated warehouse systems: A survey. Computer Modeling in Engineering & Sciences, 0( 0): 1–10

[240]

Zhang Z, Wong M O, Pan W, (2023b). Virtual reality enhanced multi-role collaboration in crane-lift training for modular construction. Automation in Construction, 150: 104848

[241]

Zhao P, Chang Y, Wu W, Luo H, Zhou Z, Qiao Y, Li Y, Zhao C, Huang Z, Liu B, Liu X, He S, Guo D, (2023). Dynamic RRT: Fast feasible path planning in randomly distributed obstacle environments. Journal of Intelligent & Robotic Systems, 107( 4): 48

[242]

Zhao X, Cheah C C, (2023). BIM-based indoor mobile robot initialization for construction automation using object detection. Automation in Construction, 146: 104647

[243]

Zheng X, Zhang C, Woodland P C (2021). Adapting GPT, GPT-2 and BERT language models for speech recognition. In: 2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU). p. 162–168

[244]

Zheng Z, Zhang Z, Pan W, (2020). Virtual prototyping- and transfer learning-enabled module detection for modular integrated construction. Automation in Construction, 120: 103387

[245]

Zhou H, Ouyang W, Cheng J, Wang X, Li H, (2019). Deep continuous conditional random fields with asymmetric inter-object constraints for online multi-object tracking. IEEE Transactions on Circuits and Systems for Video Technology, 29( 4): 1011–1022

[246]

Zhou J X, Shen G Q, Yoon S H, Jin X, (2021). Customization of on-site assembly services by integrating the internet of things and bim technologies in modular integrated construction. Automation in Construction, 126: 103663

[247]

Zhou T, Zhu Q, Ye Y, Du J, (2023). Humanlike inverse kinematics for improved spatial awareness in construction robot teleoperation: design and experiment. Journal of Construction Engineering and Management, 149( 7): 04023044

[248]

Zhou Y, Ji A, Zhang L, (2022). Sewer defect detection from 3d point clouds using a transformer-based deep learning model. Automation in Construction, 136: 104163

[249]

Zhou Y, Zhao J, Lu P, Wang Z, He B, (2024). TacSuit: a wearable large-area, bioinspired multimodal tactile skin for collaborative robots. IEEE Transactions on Industrial Electronics, 71( 2): 1708–1717

[250]

Zhu A, Pauwels P, De Vries B, (2021). Smart component-oriented method of construction robot coordination for prefabricated housing. Automation in Construction, 129: 103778

[251]

Zhu K, Zhang T, (2021). Deep reinforcement learning based mobile robot navigation: A review. Tsinghua Science and Technology, 26( 5): 674–691

[252]

Zhu Z, Ng D W H, Park H S, McAlpine M C, (2020). 3D-printed multifunctional materials enabled by artificial-intelligence-assisted fabrication technologies. Nature Reviews. Materials, 6( 1): 27–47

RIGHTS & PERMISSIONS

Higher Education Press
