1 Introduction
Global agriculture is under increasing pressure due to rapid population growth, climate change and resource depletion, all of which threaten food security and sustainability
[1,
2]. By 2050, the global population is expected to reach 9.7 billion, necessitating a projected 70% increase in food production to meet rising demand
[3]. Current farming methods, which rely on manual labor and standard management strategies, are proving insufficient in addressing these challenges
[4]. As a result, Agriculture 4.0 has emerged as a data-driven, automated and intelligent farming paradigm, driven by embedded systems that integrate artificial intelligence (AI), the Internet of Things (IoT), robotics, edge AI, unmanned aerial vehicles (UAVs), cloud computing and augmented reality (AR)
[5]. These advancements enable farmers to monitor real-time data, optimize resource usage and implement predictive decision-making, leading to more efficient and sustainable agricultural practices
[6].
Of these emerging technologies, AR has gained attention as a powerful tool for precision agriculture, offering interactive visualization, on-the-spot decision support and data-driven analysis that supports optimized decision-making in farming practices
[6]. AR enables farmers to overlay digital information onto physical surroundings, enhancing spatial awareness and supporting timely, location-based decisions
[7]. Although AI systems can generate powerful analytical insights, they often lack the spatial context and environmental interactivity required for practical deployment in the field
[8]. Without a contextual interface such as AR, AI outputs remain abstract and disconnected from farm-level actions. AR is essential for localizing, operationalizing and translating AI-generated data into actionable workflows for precision agriculture.
Current applications of AR in Agriculture 4.0 span multiple domains (Fig.1), including fertilizer and water management, AI decision support, ripeness detection, crop recommendation, disease identification and pest analysis
[9]. These applications highlight the versatility of AR in modern farming. Technological innovations in smart farming are expanding the potential of AR by enabling integration with AI, IoT, robotics and UAV-assisted sensing technologies, paving the way for more advanced applications in precision agriculture
[1,
10].
The impact of plant diseases, which cause 10% to 16% yield loss annually and have an estimated economic cost of 220 billion USD globally
[11], underscores the importance of early detection to ensure food security and prevent crop failures. Together, AI and AR can transform plant disease detection by providing non-destructive, rapid and precise diagnostic tools. Techniques such as image recognition models integrated with AR, combined with AI decision models, can significantly improve the accuracy of disease identification
[10].
The integration of AR with IoT and sensor networks can also transform how farmers monitor soil conditions, crop health and environmental factors. Recent studies show that AR-based decision support systems combined with AI image recognition and deep learning (DL) models can significantly improve pest and disease detection
[10]. In addition, UAVs equipped with hyperspectral imaging and AI analytics can work in conjunction with AR to enable large-scale farm mapping, pesticide spraying and site-specific fertilization
[9].
Although AR has been studied within the context of individual technologies, such as AI, IoT, UAVs, robotics and edge AI, no existing review has examined how it functions across these systems collectively or considered its potential as a unifying interface within embedded agricultural architectures. To address this gap, this review asks the following research question: how can AR visualization and overlays serve as a central interface for integrating and enabling the functional deployment of diverse embedded technologies in data-driven precision agriculture? The novelty of this research lies in framing AR not as a peripheral tool, but as a central interface that spatially and functionally links these components of smart farming systems. The main contributions of this review include: (1) a structured synthesis of AR applications across embedded technologies; (2) a conceptual illustration of the unifying role of AR within precision farming workflows; and (3) the identification of key research gaps and future directions to guide scalable, accessible AR deployments. These findings offer valuable insights for farmers, policymakers and stakeholders seeking to adopt AR-enabled agricultural solutions.
This paper introduces a conceptual AR-centered architecture for embedded smart farming systems, then explores AR integration in IoT-based smart farming, analyzes AR applications in agricultural robotics and machinery, and examines UAV-AR systems for crop monitoring and diagnostics. Next it reviews AI-driven decision support and image recognition in AR contexts, and evaluates AR-enabled edge computing frameworks. The discussion covers contributions, research gaps, study limitations and future research directions, followed by a summary of findings, implications for stakeholders and broader contributions to the field.
2 Augmented reality as a central interface in embedded smart farming systems
To demonstrate the integrative role of AR, this section presents a conceptual illustration of how AR can function as the core interface within embedded smart farming systems. Rather than proposing a new system, the model synthesizes current research to illustrate how AR can coordinate sensing, diagnostics and interventions across multiple embedded technologies. Unlike current implementations that treat AR, AI, IoT, UAVs and edge computing as disconnected tools, the model shown in Fig.2 positions AR as the decision-support layer linking these components, enabling farmers to interact with data and AI recommendations through a field-based interface.
The system, as illustrated in Fig.2, initiates with aerial and ground-based sensors detecting deviations in expected crop conditions and transmitting spatial coordinates through the IoT network. AR-enabled glasses or mobile devices either guide the farmer to the affected area or are used via the AR interface to direct machinery, such as a pesticide drone, to the site. Once at the location, the farmer can scan the problem area through the AR interface, which relays visual data to an edge AI module for analysis. Alternatively, if machinery has been deployed, a connected camera feed, accessed through the AR system, can transmit visual data from the equipment to the edge AI module. If the edge AI confirms an issue, the recommended actions can be relayed through the AR interface, whether on smart glasses or a mobile device, providing the farmer with auditory guidance on appropriate responses or suggesting machine-based interventions. Depending on the situation, the farmer may act directly or use the AR system to guide robotic machinery to perform the necessary tasks. If no issue is detected by the edge AI module, data can be escalated to a cloud-based model for further analysis, if internet connectivity is available. If uncertainty remains, the farmer may proceed with manual inspection as a final step. This forms a continuous loop of detection, diagnosis and response, with AR functioning as the central interface for both human- and machine-led interventions.
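To make this workflow concrete, the following minimal Python sketch traces the detect-diagnose-respond loop described above. All names, data structures and decision rules are illustrative placeholders synthesized for this review; they do not correspond to an implemented system.

```python
from dataclasses import dataclass, field

# Minimal, self-contained sketch of the detect-diagnose-respond loop described
# above; all names and rules are illustrative placeholders, not an existing API.

@dataclass
class Diagnosis:
    confirmed: bool
    actions: list = field(default_factory=list)
    requires_machinery: bool = False

def edge_ai_diagnose(frames):
    # Placeholder for the on-device model; a trivial rule stands in for inference.
    return Diagnosis(confirmed=bool(frames), actions=["apply targeted treatment"])

def cloud_diagnose(frames):
    # Placeholder for a larger cloud model, used only when connectivity allows.
    return Diagnosis(confirmed=True, actions=["schedule follow-up scan"])

def handle_alert(alert, frames, has_internet=False):
    """Process one anomaly flagged by aerial or ground sensors via the AR interface."""
    diagnosis = edge_ai_diagnose(frames)            # 1. local edge AI analysis
    if not diagnosis.confirmed and has_internet:
        diagnosis = cloud_diagnose(frames)          # 2. optional cloud escalation
    if diagnosis.confirmed:
        print(f"AR overlay at {alert['gps']}: {diagnosis.actions}")   # 3. AR guidance
        if diagnosis.requires_machinery:
            print("Directing robotic machinery to the site")          # 4. machine action
    else:
        print("No confirmed issue: manual inspection recommended")
    return diagnosis

handle_alert({"gps": (10.42, -75.53)}, frames=["frame_001.jpg"], has_internet=False)
```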
3 Integration of augmented reality with IoT networks
3.1 AR as an interface layer for IoT-based farming
Although IoT systems generate vast streams of agricultural data, their utility often depends on how effectively farmers can interpret and act on that information. AR enhances IoT usability by anchoring sensor data within the spatial layout of the farm, allowing farmers to make immediate, location-specific decisions. In the conceptual model, AR serves as the interface layer through which IoT-based alerts, such as irrigation inefficiencies, nutrient imbalances, or microclimate anomalies, are delivered to the farmer. This section reviews current research demonstrating how AR could improve the interpretability, responsiveness and practical deployment of IoT in precision agriculture.
3.2 AR-integrated smart irrigation systems
An AR-integrated smart irrigation system was developed to optimize water usage and irrigation efficiency by combining AR, IoT and machine learning (ML)
[12]. The AR-based user interface of the system enables farmers to visualize real-time soil moisture, temperature and humidity data while adjusting irrigation schedules through AR-enabled controls. IoT sensors continuously monitor environmental conditions, while ML models (SARIMA and exponential smoothing) predict optimal irrigation timing. The AR interface, built using Unity, Vuforia and ARCore, overlays on-the-spot data onto the physical farm environment, demonstrating how IoT and AR can collaboratively enhance irrigation accuracy and minimize water loss.
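As a simplified illustration of the forecasting step in such a system, the sketch below fits a SARIMA model to hourly soil-moisture readings and flags when irrigation should be scheduled. The synthetic data, model orders and the 30% moisture threshold are assumptions for demonstration, not parameters reported in the cited study.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Fit a SARIMA model to hourly soil-moisture readings and flag the first forecast
# hour that falls below an illustrative irrigation threshold.

rng = np.random.default_rng(0)
hours = pd.date_range("2024-06-01", periods=240, freq="H")
moisture = (45 - 0.05 * np.arange(240)                      # slow drying trend
            + 2 * np.sin(np.arange(240) * 2 * np.pi / 24)   # daily cycle
            + rng.normal(0, 0.5, 240))                      # sensor noise
series = pd.Series(moisture, index=hours)

fitted = SARIMAX(series, order=(1, 1, 1), seasonal_order=(1, 0, 1, 24)).fit(disp=False)
forecast = fitted.forecast(steps=48)        # next 48 hours

THRESHOLD = 30.0                            # % volumetric moisture (illustrative)
below = forecast[forecast < THRESHOLD]
if not below.empty:
    print(f"Schedule irrigation before {below.index[0]} "
          f"(forecast moisture {below.iloc[0]:.1f}%)")
else:
    print("No irrigation needed in the next 48 h")
```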
Another study developed an IoT and ML-based irrigation system using long-range wide-area networks (LoRaWAN) to remotely monitor environmental and agricultural parameters, including air temperature, pressure, humidity and soil moisture
[13]. The system incorporated a novel wind-driven optimization algorithm, which improved the prediction accuracy of irrigation-related data, achieving 87.5% accuracy in forecasting soil moisture and irrigation timing. Integrating AR could further enhance usability by overlaying forecast data and soil moisture conditions directly onto the field of view of the user through smart glasses or mobile devices, enabling farmers to make location-specific irrigation decisions in real time without navigating complex dashboards or remote interfaces.
AR could also support IoT-based water management by visually conveying nutrient runoff patterns and environmental risks in real time. When combined with IoT water quality sensors, AR could serve as an interactive decision-support tool for live monitoring of runoff contamination. The Affluent Effluent system, developed for Microsoft HoloLens, integrates AR with a system dynamics model to simulate fertilizer runoff, oxygen depletion and algal bloom formation in water bodies
[14]. Built using Unity and ShaderLab, the AR application allows users to manipulate nutrient levels, aeration and hydrological variables. By making water quality changes that are invisible to the eye perceptible, AR has the potential to complement IoT water monitoring and help reduce agricultural runoff pollution.
3.3 Optimizing IoT sensor placement and fertigation
Efficient IoT sensor placement is essential for maximizing data accuracy while minimizing costs
[15]. A quantum deep reinforcement learning model, enhanced with a modified wild geese algorithm, improved environmental parameter monitoring (temperature, humidity, air and water quality) with 96.4% accuracy while reducing sensor deployment costs by 15%
[16]. AR-based visualization tools could complement this approach by enabling interactive assessment of sensor placement, ensuring optimal distribution for large-scale agricultural operations. By overlaying sensor coverage maps onto the physical farm environment, AR interfaces could help farmers identify gaps in data collection and refine sensor deployment strategies.
IoT fertigation automation has demonstrated its potential in optimizing resource efficiency while maintaining high crop yields. A fertigation system designed for banana cultivation found that a −50 kPa irrigation strategy with 50% of the recommended dose of fertilizer resulted in 26.0% water savings while sustaining productivity at 102 t·ha⁻¹ [17]. Integrating AR-based interfaces with monitoring systems could enhance decision-making by allowing farmers to visualize live nutrient distribution and soil conditions through augmented overlays, while receiving recommendations from on-device edge AI for optimal fertilizer application.
Fertigation management could further benefit from the ability of AR to facilitate noninvasive trichome density measurement, aiding in the assessment of fertilizer-induced stress in crops such as tomatoes. A smartphone-based AR system was developed that quantifies trichome density using calibration markers on measurement paper, ensuring precise image analysis
[18]. Computer vision algorithms process these images, while ML models predict fertilizer stress, achieving a precision-recall area under the curve of 0.82, a receiver operating characteristic area under the curve of 0.64 and a strong correlation with observed stress levels (r = 0.79). This system operates entirely on-device, eliminating cloud dependency, and could integrate with IoT sensors to enable synchronized detection of nutrient stress conditions in the field.
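For reference, the metrics reported above can be computed as follows; the arrays are synthetic stand-ins for classifier scores and observed stress ratings, shown only to clarify how the precision-recall AUC, ROC AUC and Pearson correlation are obtained.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score
from scipy.stats import pearsonr

# Synthetic labels, model scores and continuous field ratings stand in for the
# trichome-based fertilizer-stress classifier evaluated in the cited study.

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=200)                                 # 1 = stressed plant
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 200), 0, 1)    # model scores
observed_stress = y_true + rng.normal(0, 0.4, 200)                    # field ratings

print("PR-AUC :", round(average_precision_score(y_true, y_score), 2))
print("ROC-AUC:", round(roc_auc_score(y_true, y_score), 2))
print("r      :", round(pearsonr(y_score, observed_stress)[0], 2))
```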
3.4 AR and IoT in pest and disease detection
IoT pest and disease detection systems have significantly advanced real-time monitoring in precision agriculture. For example, a convolutional neural network (CNN)-based intelligent pest management system achieved 97.5% accuracy in detecting oriental fruit flies, outperforming existing ML models such as support vector machines, k-nearest neighbors and random forest
[19]. Combining IoT with AR overlays could further advance functionality by displaying live pest density heatmaps, enabling farmers to visually assess infestation severity and implement AI-suggested intervention strategies more effectively.
IoT systems have also improved crop health assessment by integrating sensor data with AI models. A recursive segmentation model developed for tracking bok choy (a kind of Chinese cabbage) growth achieved high segmentation accuracy, outperforming common image recognition models such as Mask R-CNN and YOLO, which struggle with foliage occlusion
[20]. AR-assisted visualization could further improve plant growth monitoring by overlaying segmentation results directly onto the field view, allowing farmers to detect early stress indicators more accurately.
IoT disease monitoring systems have also advanced crop diagnostics by using DL and embedded sensor networks
[21]. For example, a smart crop disease monitoring system using deep residual networks and an optimized routing framework demonstrated 94.3% accuracy in classifying rice diseases (brown spot, bacterial leaf blight and leaf smut), outperforming existing models while reducing energy consumption and latency
[22]. AR could improve the usability of such systems by generating on-the-spot severity overlays and interactive treatment recommendations.
3.5 AR-enabled IoT for digital agriculture
IoT wireless sensor networks (WSNs) are key for remote farming environments, where live monitoring is necessary but internet connectivity is often limited
[23]. For example, a low-power, internet-independent WSN using nRF24L01+ transceivers has been shown to effectively monitor air temperature, humidity, soil moisture and battery voltage while ensuring reliable data transmission with minimal power consumption
[24]. AR integration could aid these systems by displaying stored sensor data in visual overlays, allowing farmers to assess crop conditions without requiring a constant internet connection.
AR is also being evaluated in combination with digital technologies such as blockchain to improve agricultural traceability. Studies combining AR with AI, IoT, 5G and blockchain have demonstrated how visual overlays can enhance transparency in food systems by linking real-time disease visualization with securely stored agricultural data
[25].
3.6 IoT-based crop recommendation and decision support with AR
Integrating AR with IoT crop recommendation systems could enable farmers to make immediate, data-based decisions. A smart crop recommendation system was developed using ESP32 microcontrollers to collect current soil and environmental data, which were then processed by a DL model trained on the Kaggle Crop Recommendation Data set, achieving 97% accuracy
[26]. The system utilizes AR to enhance user interaction by overlaying AI crop recommendations onto the physical farm environment. Through a mobile device, farmers can access live data from IoT sensors, visualizing optimal crop choices and soil conditions in an augmented interface. Built with Unity and Vuforia, the AR application provides interactive decision support, allowing users to assess recommended crops and soil management strategies more effectively.
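A minimal sketch of the recommendation step is given below. It assumes the seven sensor-derived features commonly used in the Kaggle Crop Recommendation data set (N, P, K, temperature, humidity, pH and rainfall) and substitutes a random forest trained on synthetic data for the DL model of the cited system; the crop rules and values are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Map synthetic sensor readings to a suggested crop; a stand-in for the ESP32
# sensor feed and the DL recommender used in the cited system.

rng = np.random.default_rng(2)
X = np.column_stack([
    rng.uniform(0, 140, 600),    # N
    rng.uniform(5, 145, 600),    # P
    rng.uniform(5, 205, 600),    # K
    rng.uniform(10, 40, 600),    # temperature (deg C)
    rng.uniform(15, 100, 600),   # humidity (%)
    rng.uniform(3.5, 9.5, 600),  # pH
    rng.uniform(20, 300, 600),   # rainfall (mm)
])
y = np.where(X[:, 6] > 200, "rice", np.where(X[:, 3] > 30, "maize", "chickpea"))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Hold-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

# A mobile AR client could then overlay the prediction at the sensor's location.
print("Suggested crop:", clf.predict([[90, 40, 40, 26, 80, 6.5, 220]])[0])
```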
4 Augmented reality in smart farm machinery and robotics
4.1 AR as a control and diagnostic interface for smart machinery
In the integrated AR-centered system described above, machinery and robotics represent one of the key endpoints of the decision-making loop, activated by the farmer through AR-based interfaces. AR facilitates this interaction by acting as both a visualization and control interface for smart agricultural machinery. Whether guiding the farmer through maintenance tasks or interfacing with autonomous equipment, AR overlays key operational metrics, such as fuel levels, machine diagnostics and system alerts, onto the field of view of the user. This includes both semi-automated systems and fully autonomous vehicles such as self-driving tractors and robotic sprayers, which can be directed or monitored via AR interfaces.
This section reviews current research on AR integration in agricultural machinery and robotics, with emphasis on machine monitoring, predictive maintenance, gesture-based control and autonomous field operations. Across these applications, AR functions as an interface layer that enables more responsive, precise and interactive human-machine coordination for farmers.
4.2 AR-enabled tractor interfaces and predictive maintenance
AR integration in tractor systems is emerging as a key application of smart farming machinery, especially for optimizing operation and reducing downtime. Although standard dashboards offer static readouts, AR interfaces allow real-time overlays of navigation guidance, fuel consumption and load balancing directly within the field of view of the operator
[27,
28]. These systems increasingly incorporate IoT-connected sensors and AI-based diagnostics to deliver contextual alerts and actionable system feedback.
Beyond routine operation, AR-enabled predictive maintenance tools are gaining support. Smart glasses equipped with AR can display internal sensor data directly on tractor components, assisting technicians in fault detection and repair. By highlighting wear indicators, error codes or maintenance checklists in situ, AR can reduce repair time and human error
[28]. These interfaces are particularly useful in remote areas where expert technicians may not be readily available, enabling local operators to perform guided repairs or relay live diagnostics to remote experts.
4.3 AR interfaces for gesture control and automation in agricultural machinery and robotics
Researchers have evaluated gesture-based control mechanisms to improve user interaction with AR systems in smart farming. One approach optimized AR usability using Fitts’ Law, refining gesture-based interactions for crop monitoring, machinery management and decision support
[29]. A refined 3D spatial interaction model incorporated head movement tracking and ergonomic adjustments, reducing task complexity by 40%. Implemented on Microsoft HoloLens 2 using the Unity extended reality framework and simultaneous localization and mapping, the system improved operational efficiency for farm workers by adapting to user movement through quaternion-based tracking and genetic optimization of interaction parameters.
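The optimization described above builds on Fitts' Law, which predicts target-selection time from target distance and size. The following sketch shows the Shannon formulation of that relationship; the coefficients are illustrative, not values from the cited study.

```python
import math

# Fitts' Law (Shannon formulation): MT = a + b * log2(D/W + 1), where D is the
# distance to a target overlay and W its width; a and b are device/user constants.

def fitts_movement_time(distance_m, width_m, a=0.2, b=0.15):
    """Predicted selection time (s) for an AR target of width W at distance D."""
    index_of_difficulty = math.log2(distance_m / width_m + 1)  # bits
    return a + b * index_of_difficulty

# Enlarging overlays or bringing them closer lowers the index of difficulty,
# which is one way gesture interactions can be made faster to execute.
for d, w in [(1.5, 0.10), (1.5, 0.25), (0.8, 0.25)]:
    print(f"D={d} m, W={w} m -> MT = {fitts_movement_time(d, w):.2f} s")
```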
Gesture-based AR interfaces also show promise in advancing hands-free control of robotic systems. Integrating gesture recognition with AR navigation allows operators to control tractors, drones and robotic tools through simple hand motions, minimizing dependence on standard control panels
[30]. Although gesture control supports active operation, AR could also assist in supervisory roles. For example, a spatial AI-enabled robotic platform for wheat farming demonstrated how embedded depth sensing and object recognition could support autonomous navigation, obstacle avoidance and real-time crop row detection
[31]. In similar autonomous systems, AR interfaces could provide status updates, alerts and decision logs when operator oversight or intervention is needed.
Recent work has also integrated AR interfaces with mobile field robots to support crop monitoring and control. One system enables growers and technicians to interact with an autonomous robot via an AR headset, facilitating remote operation and spatially guided navigation for field assessment
[32]. This platform supports live data exchange and control, allowing users to teleoperate the robot, request graphical crop updates or direct data collection. These advancements highlight how AR-based spatial interaction models and gesture-driven interfaces have the potential to enable more seamless human-machine collaboration across both robotic and mechanized farming systems.
4.4 AR-integrated harvesting systems
AR-assisted harvesting systems could also streamline fruit-picking processes by integrating object detection models with interactive visualization tools. BerryScope, an AR-assisted strawberry picking aid, integrates Microsoft HoloLens 2 with deep neural networks for real-time ripeness detection
[33]. The system captures images using an optical see-through head-mounted display, processes them with the QueryInst instance segmentation model and classifies ripeness using ResNet18. Fully ripe strawberries are highlighted with AR-overlaid bounding boxes, guiding users during harvesting. In field trials, BerryScope users achieved a 93% success rate in selecting ripe fruit, surpassing current manual harvesting performance.
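The classification stage of such a pipeline can be sketched as follows: a segmentation model (QueryInst in the cited work, represented here only by a placeholder crop) proposes berry regions, and a ResNet18 head labels each region so that ripe fruit can be boxed in the AR view. The weights below are untrained, so the output is illustrative only.

```python
import torch
import torchvision

# Ripeness classification for one berry region proposed by a (placeholder)
# segmentation stage; untrained weights make this a structural sketch only.

ripeness_classes = ["unripe", "partially_ripe", "ripe"]

model = torchvision.models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(ripeness_classes))
model.eval()

def classify_crop(crop_bchw: torch.Tensor) -> str:
    """Return the ripeness label for one 224x224 RGB crop (1x3x224x224 tensor)."""
    with torch.no_grad():
        logits = model(crop_bchw)
    return ripeness_classes[int(logits.argmax(dim=1))]

dummy_crop = torch.rand(1, 3, 224, 224)      # stand-in for a segmented berry region
label = classify_crop(dummy_crop)
print("Overlay AR box" if label == "ripe" else "No overlay", "-", label)
```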
In mechanized harvesting, AR is also being used to support tractor-based workflows. One system integrates AR with deep reinforcement learning to simulate harvesting scenarios and guide navigation and obstacle avoidance
[34]. Through an immersive interface, users can monitor machine status, adjust parameters and initiate harvesting routines, supporting both manual and autonomous operations.
Beyond control and visualization, AR has been shown to improve operator safety and system awareness in large-scale harvesting machines such as combines and loaders. Research on AR applications in heavy machinery shows that integrating see-through interfaces and spatial overlays reduces cognitive load and increases situational awareness for machine operators
[35]. These interfaces help monitor alerts, visualize hidden hazards and coordinate with other autonomous or semiautonomous systems in the field, enhancing harvesting efficiency, safety and precision under varying conditions.
5 Augmented reality-integrated UAVs
5.1 AR as a visual analytics interface for UAV-assisted agriculture
In the integrated AR system described above, UAVs serve as a key input layer for detecting anomalies across large-scale farms. By integrating aerial sensing with AR interfaces, UAVs expand the diagnostic reach and spatial coverage of the system. AR could enhance UAV applications by linking aerial data with context-specific visual overlays, transforming passive observation into interactive diagnostics. This allows farmers to visualize vegetation stress, soil variability or crop health conditions overlaid directly onto UAV feeds via AR-enabled devices, supporting rapid intervention.
This section reviews recent advances in AR-UAV integration, highlighting how AR serves both as a visual analytics interface and a control mechanism in tasks such as disease detection, fertilization, soil assessment, navigation and targeted pesticide application.
5.2 Smart farming with AR-integrated UAVs
AI-enabled UAVs, when integrated with AR interfaces, streamline large-scale crop surveillance and enable faster agronomic decision cycles. Standard scouting methods require time-consuming ground assessments, whereas UAVs can efficiently scan large fields while capturing high-resolution images for immediate analysis
[36].
The integration of multimodal sensors, such as red, green and blue (RGB), multispectral, LiDAR and thermal cameras, allows UAVs to detect environmental variations and plant health issues with greater accuracy. AI-powered AR overlays could further optimize this process by overlaying crop health indicators directly onto UAV footage, enabling farmers to pinpoint problem areas and take immediate corrective actions
[37]. To illustrate this approach, Fig.3 shows a drone image taken by the authors in a remote farming region near the Caribbean coast of Colombia. The red boxes highlight areas of possible vegetation stress, identified using color segmentation and edge detection ML models.
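The sketch below reproduces the kind of lightweight image processing used to generate such overlays: yellow-brown pixels are isolated in HSV color space, Canny edges outline structure and flagged regions are bounded with red boxes. The synthetic test image and HSV thresholds are illustrative assumptions rather than the exact parameters used for Fig.3.

```python
import cv2
import numpy as np

# Color segmentation plus edge detection for flagging possible vegetation stress.

frame = np.full((480, 640, 3), (40, 160, 40), np.uint8)        # healthy canopy (BGR)
cv2.circle(frame, (420, 250), 60, (30, 140, 180), -1)          # brownish stressed patch

hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
stress_mask = cv2.inRange(hsv, (10, 60, 60), (35, 255, 255))   # yellow-brown hues
edges = cv2.Canny(frame, 80, 160)                              # structural outlines

contours, _ = cv2.findContours(stress_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
overlay = frame.copy()
overlay[edges > 0] = (255, 255, 255)                           # draw edges in white
for c in contours:
    if cv2.contourArea(c) > 500:                               # ignore small speckle
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(overlay, (x, y), (x + w, y + h), (0, 0, 255), 2)  # red box

cv2.imwrite("stress_overlay.png", overlay)
```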
One study used a CNN-based UAV system to detect leaf diseases in rice, potatoes and corn, generating AR disease severity maps directly on the UAV video feed. This improved precision diagnostics and allowed for more targeted pesticide applications
[38]. AI-enabled UAVs integrated with AR-generated 3D farm maps have also been shown to optimize soil condition analysis, irrigation planning and preharvest decision-making, providing farmers with the necessary data to maximize crop yields
[39].
UAV-based AR visualization may also streamline collaborative decision-making by allowing multiple users (farmers, agronomists and researchers) to interact with the same data sets in real time
[40]. This reduces subjective interpretations and improves overall agricultural planning.
5.3 AR-integrated UAVs for fertilization, soil assessment and crop decision-making
AR-integrated UAVs have the potential to improve both accuracy and sustainability in fertilization. AI, IoT and AR-powered drone systems can use markerless AR mapping to scan fields, analyze soil conditions and apply nutrients only where needed
[41,
42]. This reduces over-application and ensures targeted nutrient delivery, minimizing environmental impact
[43].
Multi-UAV coordination is also transforming agricultural operations. AR-integrated digital twins allow farmers to visualize drone activity in the field, supporting coordinated spraying, fertilization and disease monitoring
[39]. These AR-enhanced models help optimize drone routes and treatment timing by providing a spatially accurate digital view of the farm.
Beyond fertilization, UAVs equipped with normalized difference vegetation index (NDVI) and LiDAR-based mapping technologies are advancing soil assessment by capturing high-resolution data on nutrient levels, fertility, pH and moisture variation across the field
[44]. Although these sensing systems operate primarily through IoT-enhanced UAV platforms, AR could be used to overlay this spatial data onto the visual field of farmers, supporting more precise interpretation and guiding input decisions based on current soil conditions.
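The vegetation-vigor layer underlying such overlays is typically derived from NDVI = (NIR - Red)/(NIR + Red), computed per pixel from co-registered UAV bands, as in the brief sketch below; the random arrays and the 0.4 threshold separating vigorous from stressed vegetation are illustrative placeholders.

```python
import numpy as np

# Per-pixel NDVI from co-registered near-infrared and red reflectance rasters.

rng = np.random.default_rng(3)
nir = rng.uniform(0.2, 0.9, (100, 100))       # near-infrared reflectance
red = rng.uniform(0.05, 0.5, (100, 100))      # red reflectance

ndvi = (nir - red) / (nir + red + 1e-8)       # small epsilon avoids division by zero
low_vigor = ndvi < 0.4                        # candidate zones for AR highlighting

print(f"Mean NDVI: {ndvi.mean():.2f}")
print(f"Pixels flagged for overlay: {low_vigor.mean() * 100:.1f}%")
```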
Drone-assisted AR soil sampling has been shown to be more efficient than current methods, generating detailed soil property maps that guide both fertilization and irrigation strategies
[45]. Farmers can also overlay AI-generated soil recommendations onto AR interfaces, potentially allowing for interactive, location-specific management adjustments.
In addition, UAVs enable ongoing soil monitoring, detecting early signs of degradation that could affect yields
[46]. By combining AR-driven soil analysis with AI decision tools, farmers could make more informed crop selection decisions, leading to improved productivity and better resource use.
5.4 AR for UAV navigation and gesture-controlled interfaces
Effective navigation and flight optimization are essential for UAV-assisted farming, as challenges including terrain obstacles, wind conditions and in-field adjustments must be managed for reliable deployment
[47]. An AR-enhanced UAV manipulation system improved path planning and obstacle avoidance, enabling farmers to set flight paths interactively while minimizing collision risks during pest scouting and crop disease mapping
[48]. By overlaying live environmental data onto AR interfaces, UAVs could adjust routes rapidly and responsively to ensure coverage and safety.
Gesture-driven AR interfaces further support hands-free drone control, reducing cognitive load during complex field inspections
[49]. In challenging terrain, farmers can use AR gesture recognition to control UAVs without relying on standard hand-held controllers, improving usability in dynamic farm environments
[48].
6 Augmented reality and AI image recognition and decision support
6.1 AR as an interface for AI decision support and diagnostics
In the AR-centered conceptual model, AI serves as the analytic engine that processes visual and sensor data collected through the AR interface, whether from drones, IoT sensors, or on-site field scanning. Once a problem area is identified, AR enables farmers to relay visual input to an edge AI module for analysis, making the AI layer a vital diagnostic and decision-making node in the system. When the AI confirms an issue, the system delivers context-specific recommendations, whether for manual intervention or robotic action, back to the farmer via an AR interface, through smart glasses as visual overlays, a smartphone screen, or audio prompts, providing step-by-step guidance tailored to the specific crop, condition and location.
This section reviews how AI models integrated with AR interfaces enhance decision support across a range of use cases, including disease diagnosis, image recognition, phenotyping, irrigation planning, crop quality assessment and anomaly detection. Without AR, these AI outputs remain abstract and disconnected from the operational context, limiting their practical utility for time-sensitive, location-specific tasks.
6.2 AI decision support in smart farming
One key application of AI decision support involves phenomics models, which analyze vast amounts of genetic, phenotypic and environmental data to predict crop performance under different conditions
[50]. This enables breeders to select optimal cultivars with high precision. AR interfaces could further improve this process by overlaying plant trait assessments, allowing farmers to visually compare and select the best crops for their specific environments.
AI-controlled irrigation systems are also transforming water management by using real-time data to optimize irrigation schedules. These models can analyze soil moisture, evapotranspiration and weather forecasts to predict crop water requirements, helping farmers apply water more efficiently. The integration of high-resolution land data assimilation systems and crop growth models such as AquaCrop has shown water savings between 10% and 40%, without compromising crop yields
[51]. These AI-based irrigation scheduling systems adjust watering thresholds dynamically throughout the crop cycle, preventing over-irrigation and water stress. AR interfaces could improve these systems by visualizing live irrigation data, allowing farmers to interactively monitor soil moisture levels, project water needs and make irrigation adjustments.
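A threshold-based scheduling rule of the kind these systems implement can be sketched with a simple FAO-56 style root-zone water balance, where crop evapotranspiration is estimated as ETc = Kc × ET0 and irrigation is triggered once depletion exceeds a management-allowed fraction of the available water. All parameter values below are illustrative.

```python
# Daily root-zone water balance with an irrigation trigger (illustrative values).

TAW = 120.0        # total available water in the root zone (mm)
MAD = 0.45         # management-allowed depletion fraction
KC = 1.05          # crop coefficient for the current growth stage

def schedule(depletion_mm, et0_mm, rain_mm):
    """Return (new_depletion, irrigation_mm) for one day."""
    etc = KC * et0_mm                          # crop evapotranspiration
    depletion = max(depletion_mm + etc - rain_mm, 0.0)
    irrigation = 0.0
    if depletion > MAD * TAW:                  # refill the root zone at the threshold
        irrigation = depletion
        depletion = 0.0
    return depletion, irrigation

depletion = 40.0
for day, (et0, rain) in enumerate([(5.5, 0), (6.0, 0), (5.8, 12), (6.2, 0), (6.0, 0)], 1):
    depletion, irr = schedule(depletion, et0, rain)
    print(f"Day {day}: depletion {depletion:5.1f} mm, irrigate {irr:4.1f} mm")
```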
The integration of blockchain and AI technologies is also improving agricultural traceability, ensuring secure, tamper-proof records of crop production and distribution. Blockchain-backed systems eliminate the risks of centralized databases by creating a decentralized, transparent ledger, strengthening data security and traceability efficiency
[52]. AI enhances blockchain-based traceability by automating data interpretation, uncovering inefficiencies and supporting quality assurance and sustainability metrics
[53]. Additionally, smart contracts and encryption protocols protect farm data while enabling real-time transaction verification, reducing fraud and ensuring regulatory compliance
[54].
AR could complement these technologies by making traceability data accessible at the point of use. For example, farmers and supply chain managers could scan QR codes placed on produce, equipment, or storage units to visualize AI-generated information on input history, crop quality and compliance metrics through AR interfaces. In logistics, AR-assisted inventory systems could overlay blockchain-verified updates on storage conditions, transport history and product status directly onto physical assets. By linking AR visualization with blockchain-AI systems, these tools could improve operational transparency, strengthen consumer trust and support data-driven decision-making across the agricultural supply chain.
6.3 AI-powered image recognition for AR-based diagnostics
AI-based image recognition is transforming pest and disease detection in agriculture, enabling mobile-compatible diagnostics that form the foundation for AR-integrated systems
[55]. Fig.4 presents an example of an AI-powered image recognition model applied to plant disease detection using photos captured by the authors. The images illustrate how color segmentation and edge detection techniques can be used to highlight potential problem areas in crops. Green bounding boxes with corresponding labels indicate suspected plant health issues, including fungal infections, nutrient deficiencies and pest-related damage. This example serves to demonstrate how AI-based models can assist in the identification of disease symptoms.
CNNs have demonstrated high accuracy in crop pathology classification
[56]. For example, NASNet Mobile and EfficientNetB0 were shown to achieve over 98% precision in detecting black gram diseases
[57]. Combining CNNs with mobile vision transformers enhances both local and global feature extraction, making AI-powered AR diagnostics more adaptable to unpredictable field conditions
[58]. Similarly, domain adaptation models, such as Wasserstein-based unsupervised domain adaptation, improve model generalization by reducing discrepancies between controlled data sets and real-world agricultural environments
[58]. Another study demonstrated the effectiveness of DL-based image recognition for plant disease detection using the YOLOv4 CNN model
[59]. That study applied object detection to classify leaf diseases, achieving high accuracy in identifying diseased regions in crops. In addition, data sets incorporating images across various growth stages, backgrounds and lighting conditions have improved classification accuracy
[60]. Together, these advances highlight the growing maturity of AI-based image recognition as a foundation for field-ready diagnostics.
Building on this technical foundation, recent studies have moved beyond static classification to develop real-time, user-facing AR applications. One study developed a YOLOv5-based AR system for plant disease classification, refining semantic segmentation to detect individual leaf abnormalities rather than whole plant structures
[61]. This CNN-based AR model improved disease severity assessment, achieving 70% to 90% accuracy, demonstrating its potential for AR in smart farming.
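The detection step of such a system can be approximated with the public YOLOv5 hub API, as sketched below. The generic pretrained weights loaded here are only a stand-in for a model fine-tuned on leaf-lesion classes, and the severity score, taken as the fraction of the image covered by detected regions, is a simplification introduced for illustration rather than the method of the cited study.

```python
import numpy as np
import torch

# YOLOv5 detection with a crude severity proxy (share of image covered by boxes).

model = torch.hub.load("ultralytics/yolov5", "yolov5s", pretrained=True)
image = np.zeros((640, 640, 3), dtype=np.uint8)      # placeholder for a leaf photo

results = model(image)
boxes = results.xyxy[0]                               # (x1, y1, x2, y2, conf, class)

lesion_area = sum(float((x2 - x1) * (y2 - y1)) for x1, y1, x2, y2, *_ in boxes)
severity = lesion_area / (image.shape[0] * image.shape[1])
print(f"Detections: {len(boxes)}, approximate severity: {severity:.1%}")
```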
Another example is the modified pyramidal convolutional shuffle binary attention residual network, an AR-powered cassava disease detection system that integrates DL with AR overlays for diagnostics and fertilizer recommendations
[62]. This system improved feature extraction, while advanced Harris hawk optimization refined model parameters, increasing computational efficiency. Trained on 286 cassava leaf images, the system achieved 99.0% accuracy, surpassing CNN and transformer-based models in classification performance. Additionally, its AR interface provided disease visualizations and delivered disease-specific fertilizer recommendations, achieving a precision of 99.0% and a recall rate of 97.6%. This illustrates the potential of AR not only to detect disease but also to deliver agronomic advice directly in the field.
Object detection and multimodal imaging approaches further refine AR diagnostics by integrating RGB, infrared and thermal imaging for enhanced crop analytics. YOLOv8-based object detection has demonstrated high accuracy in early-stage identification of leaf blight in taro crops, outperforming current models
[63]. Meanwhile, deep augmented learning frameworks incorporating RGB, infrared and hyperspectral imaging significantly improve banana leaf disease classification
[64]. Transformer-based architectures further optimize feature generalization, improving accuracy in distinguishing early and late-stage infections
[65]. These advances indicate that multispectral overlays in AR interfaces could help surface subtle physiologic changes in crops, enabling more spatially precise interventions. However, realizing this potential in embedded agricultural systems will require translating these models into lightweight, computationally efficient formats compatible with edge AI devices.
In addition to disease detection, AR tools are also being evaluated to improve monitoring and accuracy in areas with insect infestations. An AR-integrated smart glasses system for rice planthopper detection combined high-resolution imaging with DL-based classification
[66]. Using the Cascade-RCNN-PH model, enhanced with adaptive sample matching and a squeeze-and-excitation feature pyramid network, the system improved small-target detection, achieving a recall of 83.4% and a precision of 83.6%. In the same study, AR imaging via a mobile application reduced manual labor by 50% while maintaining high detection accuracy in heavily infested fields.
Beyond detection, AR systems are also being used to support decision-making by visualizing insect development stages and guiding appropriate interventions. One such system, designed for organic farming, integrated ML and computer vision to support live analysis of insect presence in the field
[67]. Its CNN model, trained on the IP102_V1.1 data set, a comprehensive benchmark for agricultural insect classification, achieved 90% accuracy in recognizing caterpillars, flea beetles and whiteflies, while also analyzing leaf damage patterns to refine detection accuracy. Using a smartphone-based AR interface, the system overlaid 3D insect models, developmental stages and severity metrics. Based on the classified insect type and stage, the interface then displayed predefined organic treatment options such as neem extract application, crop rotation and the use of beneficial insects.
6.4 AI yield estimation, crop quality and anomaly detection
Beyond pest and disease management, AI and AR are increasingly being used to monitor crop development, estimate yields and detect anomalies throughout the growing cycle. For example, AR-IA facilitates real-time tomato yield estimation and quality assessment
[68]. Built using Unity and ARKit, the system optimizes image capture to minimize redundancy. Trained on 2083 RGB images from UAV-collected and Kaggle data sets, AR-IA improves yield prediction accuracy through AR overlays that display projected yield estimates, supporting preharvest decision-making.
Advancements in AR-powered fruit classification can improve harvesting accuracy by integrating DL-based image processing. An AR-assisted ripeness detection system for greenhouse-grown strawberries was developed, incorporating YOLOv7 for automated fruit classification
[69]. Trained on 8000 images, the model correctly identified ripe fruit with 89% overall accuracy and balanced precision and recall at 92%, making it more effective than previous fruit detection methods. Integrated with Microsoft HoloLens 2, the system enabled live AR visualization of ripeness levels. Applying AR-powered fruit assessment to more crops could help farmers harvest at the right time, improving efficiency and reducing losses after harvest.
AI anomaly detection is improving real-time plant health monitoring and disease recognition. For example, diffusion-based models such as CropDetDiff refine disease detection by analyzing plant features at different levels of detail, which could make AR-assisted diagnostics more effective, especially when data are limited
[70]. AI is also improving fruit and crop classification; for example, EfficientNet-based models accurately assess apple quality based on color, shape and texture, reducing the need for manual sorting
[71]. Similarly, multimodal fusion techniques that combine RGB and depth imaging improve tea shoot detection by enhancing image contrast in low-light environments
[72]. Early yield estimation models using YOLOv8 and YOLOv5 have demonstrated strong correlations between detected and actual flower counts, improving harvest planning and labor management
[73].
Advancements in segmentation and object detection models could further enhance AR-assisted agricultural monitoring. Vision transformers combined with Mask R-CNN effectively detect and segment small green citrus fruits in dense orchards, overcoming occlusion and background noise challenges
[74]. FLTrans-Net, a transformer-based feature learning model for wheat head detection, demonstrated superior performance in identifying small, overlapping wheat spikes under challenging field conditions, achieving a mean average precision of 96.1% on the GWHD-2021 data set while maintaining lightweight efficiency
[75]. Another example is AgriDeep-Net, a feature-fusion DL model that improves fine-grained classification of visually similar plant species, ensuring more accurate plant identification
[76]. Self-supervised learning methods, such as channel randomization, improve anomaly detection by training AI models to detect subtle variations in plant color, outperforming current data augmentation techniques
[77]. These innovations have the potential to improve the ability of AR to detect physiologic stress, disease progression and crop quality changes. As these models are designed for edge computing, they can be used in embedded systems, allowing on-site crop monitoring without relying on cloud computing or high computational demands.
7 Augmented reality for precision agriculture through edge computing
7.1 AR interfaces enhanced by edge computing for precision farming
In the conceptual model described above, edge computing acts as the local processing backbone for AR interfaces. By supporting AI diagnostics and decision-making directly on-site, edge computing could enable AR systems to deliver important data and field-level guidance, even in areas lacking stable cloud access. This is particularly important in rural or connectivity-limited environments, where latency and bandwidth constraints can hinder responsiveness.
Recent studies have investigated how integrating edge computing with AR enables key functions such as spatial mapping, collaborative diagnostics and secure data handling in precision farming. Across these applications, edge computing enhances the responsiveness, energy efficiency and scalability of AR-powered embedded systems.
7.2 Spatial mapping, task offloading and secure farm data transmission
An edge-cloud coordination platform integrating simultaneous localization and mapping (SLAM) with the Robot Operating System (ROS) was developed to enhance AR-assisted disease tracking, soil health monitoring and crop stress visualization. The SLAM image analyzer dynamically selects the most efficient SLAM model, reducing power consumption by 30% and achieving a latency of 50 ms, enabling rapid collaboration among farmers, agronomists and researchers
[78].
Alongside spatial mapping, optimized AI-driven task offloading further improves AR-based crop diagnostics and automation. By balancing computing loads between local AR devices and edge servers, a hybrid particle swarm optimization–genetic algorithm model could reduce AR task execution latency, streamlining pest monitoring, disease identification and irrigation control
[79]. Similarly, a quality of service-aware AR task offloading model has been shown to improve execution efficiency by 92%, ensuring fast and reliable processing for farm monitoring
[80].
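The trade-off these offloading optimizers search over can be illustrated with a deliberately simplified per-task rule that compares on-device execution time against transmission plus edge execution time. The cited works use metaheuristics such as hybrid PSO-GA over richer cost models; the numbers below are invented for demonstration.

```python
from dataclasses import dataclass

# Greedy per-task offloading decision based on a simple latency model.

@dataclass
class Task:
    name: str
    cycles: float      # required compute (giga-cycles)
    data_mb: float     # input size to transmit if offloaded

LOCAL_GHZ, EDGE_GHZ = 1.5, 12.0     # device vs edge server compute
UPLINK_MBPS = 20.0                  # field wireless uplink

def best_placement(task: Task) -> str:
    local_s = task.cycles / LOCAL_GHZ
    offload_s = task.data_mb * 8 / UPLINK_MBPS + task.cycles / EDGE_GHZ
    return "edge" if offload_s < local_s else "local"

for t in [Task("pest heatmap", cycles=6.0, data_mb=2.0),
          Task("disease segmentation", cycles=25.0, data_mb=8.0),
          Task("overlay rendering", cycles=0.2, data_mb=1.0)]:
    print(f"{t.name:22s} -> {best_placement(t)}")
```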
A major challenge in AR-assisted farming is transmitting data in areas with poor connectivity. A recent study proposed a deep learning and Lagrange optimization-based model to enhance the reliability and efficiency of IoT communication in smart agriculture
[81]. By optimizing transmission distance and reducing interference in environments with overlapping wireless signals, the model significantly improved energy efficiency and data throughput, which are critical for ensuring stable AR-based monitoring in rural settings.
The security of IoT-integrated AR farming networks is another major consideration, especially as AR devices handle large volumes of sensitive farm data. A lightweight authentication protocol using extended Chebyshev chaotic maps and physical unclonable functions reduces computational costs by 50%, making secure AR data exchange feasible for IoT-connected agricultural devices
[82]. This ensures data integrity and cybersecurity in automated precision farming networks, reducing the risk of unauthorized access and system vulnerabilities.
7.3 Edge computing for AR imaging and remote farm management
Edge computing is also advancing high-resolution AR imaging for AI-assisted crop health analysis. A generative AI-powered super-resolution model in multiaccess edge computing environments was developed to improve object detection accuracy by reconstructing high-resolution images from low-resolution inputs, enabling detailed analysis of crop health, disease patterns and pest infestations
[83]. Processing these images at the edge also reduces power consumption, improving the energy efficiency of edge-based AR diagnostics.
Beyond imaging, edge computing is improving AR farm monitoring and diagnostics in remote areas. LoRaWAN-based task offloading has significantly enhanced AR-enabled field diagnostics by distributing computationally intensive tasks across multitier edge computing nodes, reducing latency from about 2 s to about 0.3 s
[84]. This could allow for real-time crop health monitoring, soil moisture assessment and environmental stress detection, making low-power AR interfaces practical for rural precision agriculture where cloud access may be limited.
Improving energy efficiency in large-scale AR-powered farm inspections is key to advancing edge computing. A hybrid Monte Carlo tree search-based AR task offloading model, which integrates YOLOv7 for object recognition and structure from motion for 3D mapping, has significantly reduced energy consumption to 1.29 MJ per inspection, while maintaining fast response times of 24 ms
[85]. This reduction means UAVs and mobile AR platforms could operate longer without frequent recharging, making AI-assisted monitoring more viable for extended field inspections.
To further enhance AR-based agricultural diagnostics, DL models must be optimized for deployment in edge environments. RTR_Lite_MobileNetV2, a low-power CNN designed for edge computing, achieves 99.9% accuracy while reducing model size by 53.8%, making it ideal for deployment on IoT-enabled AR farming tools such as Raspberry Pi-based systems
[86]. Integrating these lightweight models supports scalable, battery-conscious deployment of AI-assisted diagnostics, allowing farmers in connectivity-limited regions to benefit from high-performance AR applications without relying on cloud infrastructure.
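A typical deployment path for such lightweight models on Raspberry Pi-class AR field tools is the TensorFlow Lite runtime, as sketched below. The model file name is hypothetical and the zero-valued frame stands in for a camera capture; the cited study did not publish this artifact.

```python
import numpy as np
from tflite_runtime.interpreter import Interpreter

# On-device inference with a quantized/exported CNN on a Raspberry Pi-class device.
# "rtr_lite_mobilenetv2.tflite" is a hypothetical exported model file.

interpreter = Interpreter(model_path="rtr_lite_mobilenetv2.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> int:
    """Run one preprocessed camera frame (batch x H x W x 3) through the model."""
    interpreter.set_tensor(inp["index"], frame.astype(inp["dtype"]))
    interpreter.invoke()
    return int(np.argmax(interpreter.get_tensor(out["index"])))

frame = np.zeros(inp["shape"], dtype=np.uint8)   # placeholder for a camera capture
# The predicted class index can then drive the AR overlay shown to the farmer.
print("Predicted class:", classify(frame))
```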
8 Discussion
8.1 How this review advances beyond prior work
This review positions AR not merely as a visualization tool but as the connective interface that enables embedded AI, IoT, UAV and robotics technologies to operate as a coherent, field-responsive smart farming ecosystem. This study grounds its AR-centered model in empirical findings from recent literature that demonstrate the role of AR across embedded technologies. By reviewing each domain separately and identifying the demonstrated utility of AR within each, we believe we have constructed a credible argument for their unification under a single AR-centered interface. This integrative architecture is not speculative but represents a feasible convergence of independently validated subsystems, grounded in the presented literature.
Recent secondary literature emphasizes the growing importance of AI and AR for precision agriculture. Image-based phenotyping technologies, increasingly deployed via smartphone platforms, are transforming crop diagnostics and trait analysis in field environments
[87]. This reinforces the central argument of our review that AR can serve as a spatial interface to render AI-derived insights actionable at the farm level. The role of bioinspired algorithms, including genetic algorithms, particle swarm optimization, and ant colony optimization, in optimizing key agricultural processes such as pest detection, irrigation scheduling, and machinery path planning has also been explored
[88]. Although this study does not focus on AR, it highlights the importance of intelligent algorithms in guiding decision-making. Our review extends this line of research by positioning AR as the interface layer through which these algorithmic outputs can be delivered in real time and rendered actionable under field operations.
A recent survey of AR and VR in agriculture highlighted that while AR adoption is growing, it has been insufficiently examined in terms of low-cost, mobile-based deployment
[89]. The reported usability barriers and technical limitations support our emphasis on developing smartphone-based AR solutions for smallholder farmers—an option not sufficiently addressed in most technical literature but vital for global scalability. Our contribution builds on this by proposing embedded system architectures based on AR that balance performance with accessibility. Specifically, we reviewed recent AR implementations across various domains to identify system configurations, interface strategies, and hardware platforms that support cost-effective deployment in resource-limited agricultural settings.
AR has been shown to dominate extended reality applications in the agricultural sector, especially for decision-making tasks in real-world contexts
[90]. The need for more field-validated, ergonomic systems highlights a disconnect between experimental tools and practical deployment. This supports our call for participatory design and user-centric development in future AR interfaces.
An analysis of extended reality trends in agriculture confirms the emergence of AR as the leading extended reality modality in precision farming
[91]. Interoperability and data privacy are identified as top challenges—issues we also foreground in our limitations and future research sections. Although that review surveys extended reality broadly, our perspective differentiates itself by specifically framing AR as the interface layer that integrates and operationalizes intelligent embedded systems at scale.
While most existing reviews focus on AR capabilities in isolation, one study emphasizes that the true utility of AR emerges only when coupled with IoT, AI, and GPS-based technologies
[9]. This study describes how AR allows farmers to interpret complex environmental data through overlays on crops or machinery, aligning with our argument that AR closes the loop between sensing, computation, and human action. We extend this by showing how AR could be integrated across edge computing, UAVs, and robotic systems to enable intelligent farming practices.
8.2 Research gaps in AR-embedded agricultural systems
Despite recent progress, significant technical and infrastructural challenges continue to limit the widespread adoption of AR in precision agriculture. Computational efficiency, data processing and scalability remain significant concerns, particularly in rural areas with limited digital infrastructure. The high computational demands of AI-powered AR models require efficient task offloading strategies to edge and cloud computing networks to reduce latency and power consumption
[79]. AI-driven image recognition models used for disease detection and crop monitoring often struggle with domain adaptation, limiting their accuracy in real-world farming conditions compared to controlled data sets
[58]. Coordination across AR platforms, IoT sensor networks and farm machinery also remains a key issue, as inconsistent hardware capabilities and a lack of standardization hinder seamless integration
[20]. In addition, security vulnerabilities in wireless IoT-enabled AR networks pose risks for real-time data exchange, requiring lightweight authentication protocols to ensure safe and reliable communication
[82].
Although both IoT and AR have demonstrated potential, a lack of scalable, low-latency integration pipelines that enable data processing across diverse farm environments persists
[92]. Existing systems often suffer from inconsistent data exchange formats and insufficient support for visualization across different sensor types, environmental conditions and AR device platforms, particularly when dealing with thermal imaging, soil moisture data or multisource sensor fusion.
Most AR-enabled AI decision support tools rely on complex ML models whose outputs are difficult for farmers to interpret. The lack of transparency in how these AI models generate recommendations, often referred to as black-box or opaque reasoning, can undermine farmer trust, highlighting the need for built-in interpretability to explain diagnostic outputs
[93]. In addition, explainable AI techniques are largely absent from AR interfaces, and farmers currently lack transparent justifications for algorithmic recommendations.
The scarcity of diverse, field-representative data sets hampers model generalization when systems are applied to new regions, climates or crop types, limiting scalability. The absence of comprehensive, publicly available data sets also restricts benchmarking and validation, making it difficult to assess system performance across contexts
[94]. Region-specific data sets dominate current development efforts, limiting cross-domain AI training and weakening system performance in unfamiliar or diverse agricultural zones.
Although UAVs are increasingly used in combination with AR for field diagnostics, existing systems are typically designed for single-drone operation. Current UAV-AR systems lack well-developed synchronized multidrone capabilities, restricting their effectiveness in large, complex farm networks
[95]. Most current implementations also fail to exploit swarm intelligence or decentralized communication protocols, leaving drone coordination largely dependent on centralized planning or operator input.
Another significant research gap is the limited number of studies examining AR integration across multiple embedded systems simultaneously. Although there is growing literature on AR paired individually with technologies such as IoT, UAVs, robotics or AI, few studies explore how AR can interface with several of these systems in a coordinated manner. Most existing systems focus on isolated real-time visualizations or monitoring tasks, often lacking automation or feedback loops that connect sensing, actuation and decision support. This fragmentation hinders AR from fulfilling its potential as a central, integrative interface.
Additional research gaps include real-time processing limitations in AR-assisted autonomous machinery. Many AR interfaces in robotics and tractors rely on static sensor configurations, which restrict adaptability to changing terrain or equipment behavior
[96]. Without dynamic, real-time edge AI support, current systems struggle with latency in environments where ML predictions must be adjusted immediately.
Gesture-based AR control systems also face major limitations in field conditions. Environmental variability, including lighting changes, background clutter and operator movement inconsistencies, disrupts the reliability of gesture tracking and spatial inputs
[97]. As a result, their deployment remains limited to highly controlled settings, undermining their potential for hands-free control in real farm environments.
Finally, disease detection using UAVs remains constrained by the limited predictive capabilities of current AR-overlay models. Most implementations visualize raw or basic detection outputs without offering real-time, AI-guided recommendations or early-stage indicators based on plant stress biomarkers
[98]. DL-based systems lack sufficient integration with AR overlays to enable proactive decisions during UAV surveillance missions.
8.3 Limitations of the present review
Although this review offers a structured synthesis of AR integration within embedded agricultural systems, several limitations must be acknowledged. First, the scope of the review was intentionally focused on conceptual and architectural frameworks rather than empirical performance evaluations or field trials. As a result, the practical effectiveness of specific AR solutions in diverse agricultural contexts, particularly in smallholder and resource-constrained settings, could not be fully assessed. Also, the conceptual model proposed in this review, though grounded in current research, has not been field-tested and may require adaptation to specific regional, technological or infrastructural contexts before large-scale deployment.
Second, although this work draws on a carefully selected set of peer-reviewed sources, it may omit relevant studies published in non-indexed regional journals or in languages other than English. This introduces a potential bias toward research emerging from technologically advanced contexts, which may not fully capture the realities of AR deployment in semi-subsistence and small-scale agricultural systems.
Third, the literature reviewed primarily spans developments in AR, AI and IoT between 2020 and early 2025. Given the rapid pace of technological change in smart farming, some recent innovations may have been excluded due to publication or indexing delays. As a result, certain observations may have limited long-term generalizability.
8.4 Future research directions of AR in embedded systems
Building on the identified research gaps and technological limitations, this section outlines key future directions to advance the integration of AR within embedded agricultural systems. First, improving real-time integration between AR and IoT systems remains a foundational challenge. Future studies should develop universal data exchange models and low-latency pipelines that support seamless sensor visualization across heterogeneous farm environments and hardware systems. Enhancing AR visualization of irrigation and fertigation data by incorporating thermal imaging, soil moisture data and environmental forecasting is another promising research direction.
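As a hedged illustration of what such a data exchange pipeline might look like, the sketch below subscribes to soil moisture readings over MQTT, a lightweight messaging protocol widely used in IoT deployments, and repackages each reading into a JSON payload that an AR client could anchor as a field overlay. The broker address, topic layout and payload schema are assumptions chosen for the example, not an established standard.

```python
import json
import paho.mqtt.client as mqtt  # lightweight IoT messaging library, assumed available

# Hypothetical topic layout: farm/<node_id>/soil_moisture,
# payload = {"value": <percent>, "lat": <deg>, "lon": <deg>}
SENSOR_TOPIC = "farm/+/soil_moisture"

def to_ar_overlay(node_id: str, reading: dict) -> dict:
    """Convert a raw sensor reading into a payload an AR client could anchor in space."""
    return {
        "anchor": {"lat": reading["lat"], "lon": reading["lon"]},
        "label": f"{node_id}: {reading['value']:.0f}% moisture",
        "severity": "low" if reading["value"] < 25 else "ok",
    }

def on_message(client, userdata, msg):
    node_id = msg.topic.split("/")[1]
    reading = json.loads(msg.payload)
    overlay = to_ar_overlay(node_id, reading)
    # Forward to the AR device; here the overlay is simply republished on a dedicated topic.
    client.publish(f"ar/overlays/{node_id}", json.dumps(overlay))

client = mqtt.Client()
client.on_message = on_message
client.connect("broker.local", 1883)   # hypothetical on-farm broker
client.subscribe(SENSOR_TOPIC)
client.loop_forever()
```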
To improve automation in smart farm machinery, researchers should focus on reducing latency in AR-guided tractor and robotic interfaces by creating adaptive AI models that account for terrain and machinery variability. In parallel, gesture-based AR control systems require refinement for improved deployment in outdoor conditions, particularly regarding lighting inconsistencies and sensor noise. Expanding robotic harvesting capabilities to include real-time ripeness detection, damage assessment and cooperative human-robot interaction will be essential for advancing autonomous farm operations.
In aerial contexts, future research should advance UAV-based AR automation by applying reinforcement learning and adaptive AI to flight path and treatment optimization. Further work on UAV systems should explore swarm coordination frameworks and collaborative AR visualization. Additionally, early-stage crop stress prediction using UAVs with hyperspectral and multispectral imaging should be paired with AR overlays for more precise, pre-symptomatic diagnosis.
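As a minimal, assumption-laden sketch of the reinforcement learning direction, the example below applies tabular Q-learning to a toy grid of field cells, rewarding flight steps that cover previously unscanned cells. Real UAV planners would need far richer state (battery, wind, sensor payload), and the state here deliberately ignores visit history for brevity; the grid size and hyperparameters are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 4                               # 4x4 field cells (illustrative)
N_STATES, N_ACTIONS = GRID * GRID, 4   # actions: up, down, left, right
Q = np.zeros((N_STATES, N_ACTIONS))
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2

def step(state, action):
    """Move one cell in the chosen direction, staying inside the field."""
    r, c = divmod(state, GRID)
    dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][action]
    r, c = min(max(r + dr, 0), GRID - 1), min(max(c + dc, 0), GRID - 1)
    return r * GRID + c

for episode in range(500):
    visited = {0}
    state = 0
    for _ in range(40):                # limited battery budget per flight
        if rng.random() < EPS:
            action = int(rng.integers(N_ACTIONS))
        else:
            action = int(np.argmax(Q[state]))
        nxt = step(state, action)
        reward = 1.0 if nxt not in visited else -0.1   # reward new coverage
        visited.add(nxt)
        Q[state, action] += ALPHA * (reward + GAMMA * Q[nxt].max() - Q[state, action])
        state = nxt

# The greedy policy derived from Q suggests a coverage-efficient scan order.
print(np.argmax(Q, axis=1).reshape(GRID, GRID))
```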
To enhance model generalization and performance across regions, future efforts should establish global, standardized agricultural data sets for training and validating AI-AR systems across diverse crops, climates and geographies. These systems could also benefit from further research into ultra-low-power AI models and neuromorphic computing, enabling on-the-spot inference on embedded AR devices with minimal energy use.
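One established route toward such low-power, on-device inference is post-training quantization; the sketch below shows this with TensorFlow Lite purely as an example of the workflow. The saved-model path and output file name are hypothetical, and neuromorphic hardware would require entirely different toolchains.

```python
import tensorflow as tf

# Hypothetical SavedModel directory for a crop-disease classifier.
SAVED_MODEL_DIR = "models/leaf_disease_classifier"

# Post-training dynamic-range quantization: weights are stored in 8-bit,
# typically shrinking the model by roughly 4x and reducing inference energy,
# which matters for battery-powered AR headsets and smartphones.
converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("leaf_disease_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```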
As farms become increasingly data-driven, multitier edge computing architectures must be optimized using adaptive task scheduling across AR wearables, UAVs and ground systems. Simultaneously, researchers should improve data security through federated learning and lightweight encryption protocols that maintain privacy across AR-IoT platforms.
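To illustrate the privacy-preserving direction, the sketch below implements federated averaging (FedAvg) in its simplest form: each farm trains locally and shares only model weights, which are combined in proportion to local data volume, so raw sensor and image data never leave the farm. The client names, weight shapes and sample counts are illustrative.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Combine locally trained model weights, weighting each farm by its data volume.

    client_weights: list of per-farm weight lists (one numpy array per layer).
    client_sizes:   number of local training samples held by each farm.
    """
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    averaged = []
    for layer in range(n_layers):
        layer_sum = sum(w[layer] * (size / total)
                        for w, size in zip(client_weights, client_sizes))
        averaged.append(layer_sum)
    return averaged

# Illustrative aggregation round with two farms and a tiny two-layer model.
farm_a = [np.array([[0.2, 0.4]]), np.array([0.1])]
farm_b = [np.array([[0.6, 0.0]]), np.array([0.3])]
global_weights = federated_average([farm_a, farm_b], client_sizes=[300, 100])
print(global_weights)   # weighted toward farm_a, which holds more data
```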
Another future research priority is the interoperable integration of AR, IoT, UAVs, robotics and AI into unified frameworks that allow seamless data flows and system-wide coordination. Supporting this integration, AI-enhanced AR interfaces should be developed to facilitate responsive interaction between autonomous machinery, drones and sensor networks.
The conceptual model proposed in this review illustrates one possible pathway for realizing this type of unified, AR-centered smart farming system. Future research should focus on validating the model through simulation studies, field deployments and participatory trials that measure both technical feasibility and user interaction. Key priorities include designing middleware that enables communication between system components, evaluating model responsiveness under varying connectivity conditions and studying how farmers interact with AR interfaces when switching between manual and autonomous control modes.
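As a possible starting point for the middleware design work noted above, a shared message envelope that every component (sensor gateway, UAV, robot, AR client) emits and consumes would allow loose coupling across the system. The schema below is a hypothetical minimal example rather than a proposed standard; the field names and example coordinates are assumptions.

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class FarmMessage:
    """Minimal envelope exchanged between embedded components and the AR layer."""
    source: str    # e.g. "soil-node-12", "uav-2", "ar-headset-1"
    kind: str      # e.g. "reading", "alert", "command"
    payload: dict
    timestamp: float = field(default_factory=time.time)
    message_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    def to_json(self) -> str:
        return json.dumps(asdict(self))

# A UAV reporting a suspected disease hotspot that an AR client could anchor and display.
msg = FarmMessage(
    source="uav-2",
    kind="alert",
    payload={"lat": 4.142, "lon": -73.626, "label": "possible leaf blight", "confidence": 0.82},
)
print(msg.to_json())
```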
To ensure equitable access, future work must prioritize the design of smartphone-based AR tools that reduce reliance on costly headsets or proprietary hardware, particularly for use in smallholder and resource-constrained contexts. Additionally, the field would benefit from participatory design research involving farmers, agronomists and technicians to align AR system development with real-world agricultural workflows. Building on this agenda, we are currently implementing and evaluating the proposed AR-centered model in smallholder rice farms in Meta, Colombia.
9 Conclusions
This review addressed the core research question of how AR visualization and overlays can serve as a central interface for integrating and enabling the functional deployment of diverse embedded technologies in data-driven precision agriculture. It answered this question through four key contributions. First, it introduced a conceptual model illustrating how AR unifies anomaly detection, diagnostics and responsive action through an AR-centered architecture integrating IoT, UAVs, edge AI and robotic systems. Second, it offered a systems-level synthesis of the role of AR across these technologies, demonstrating how AR supports visualization, interaction and decision-making through real-world applications and embedded technical frameworks. Third, it discussed these contributions within the broader literature, emphasizing that while research has often examined components in isolation, AR has the potential to provide an integrative interface for precision agriculture. Finally, it identified research gaps and future directions that reinforce the value of this work by highlighting the key challenges that must be addressed to fully implement AR-centric smart farming systems.
This review underscores the importance of accessible, scalable and farmer-centered AR innovations, particularly in resource-limited contexts. By highlighting low-cost deployment strategies, such as smartphone-based AR interfaces, lightweight edge-AI models and modular system design, it helps bridge the gap between emerging technologies and their practical implementation in actual agricultural environments.
Overall, this review endeavors to contribute to the broader knowledge base by proposing a structured roadmap for developing integrated AR systems, grounded in the demonstrated feasibility of AR within individual embedded technologies. The identified gaps and proposed directions point to high-impact opportunities for interdisciplinary collaboration, such as AR-IoT interoperability, lightweight edge AI and explainable AI interfaces. Policymakers could draw on these findings to inform regulatory and funding frameworks that promote inclusive, secure and scalable AR innovations in agriculture. Practitioners, including agronomists and technology developers, could use the conceptual model and literature synthesis provided to assess the readiness and adaptability of AR solutions.