Wheeled robots are widely used in areas such as food delivery and room disinfection, where they lower labor costs and reduce the risk of human infection. Since such robots must avoid obstacles, path planning is a fundamental module. The A* algorithm has been widely used for this purpose, but it incurs a large memory overhead and can yield suboptimal paths. We therefore propose an improved A* algorithm that combines the jump point search method with a pruning step. Specifically, jump point search reduces the occupancy of the open list, while pruning shortens the resulting path. Simulation experiments showed that the improvement is effective and practical.
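As a rough sketch of how jump points thin out the open list (not the authors' implementation), the following Python illustrates a simplified 4-connected variant: `jump` slides along one direction until it reaches the goal, a blocked cell, or a forced neighbor, so only jump points, rather than every grid cell, are pushed onto the open list. The function names and the grid encoding (1 = obstacle) are assumptions for illustration.

```python
import heapq

def jump(grid, x, y, dx, dy, goal):
    """Slide from (x, y) along (dx, dy) until the goal, a blocked cell,
    or a forced neighbor is reached; return the jump point or None."""
    rows, cols = len(grid), len(grid[0])
    while True:
        x, y = x + dx, y + dy
        if not (0 <= x < rows and 0 <= y < cols) or grid[x][y]:
            return None
        if (x, y) == goal:
            return (x, y)
        if dx != 0:  # a wall beside the previous cell forces a turn here
            if (y > 0 and grid[x - dx][y - 1] and not grid[x][y - 1]) or \
               (y + 1 < cols and grid[x - dx][y + 1] and not grid[x][y + 1]):
                return (x, y)
        else:
            if (x > 0 and grid[x - 1][y - dy] and not grid[x - 1][y]) or \
               (x + 1 < rows and grid[x + 1][y - dy] and not grid[x + 1][y]):
                return (x, y)

def jps(grid, start, goal):
    """A* whose successors are jump points, so far fewer nodes enter the open list."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_list, closed = [(h(start), 0, start, [start])], set()
    while open_list:
        _, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        if node in closed:
            continue
        closed.add(node)
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):  # all directions, for simplicity
            jp = jump(grid, node[0], node[1], dx, dy, goal)
            if jp and jp not in closed:
                ng = g + abs(jp[0] - node[0]) + abs(jp[1] - node[1])
                heapq.heappush(open_list, (ng + h(jp), ng, jp, path + [jp]))
    return None
```

The returned path lists only jump points; the intermediate cells lie on the straight segments between consecutive jump points.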
Hybrid impedance/admittance control aims to give a manipulator adaptive behavior when interacting with its surroundings: impedance control is suited to stiff environments, while admittance control is suited to soft environments and free motion. Hybrid impedance/admittance control modulates the control action to combine the advantages of both behaviors. Although some work has addressed this topic, open issues remain. In particular, the proposed contribution aims (i) to preserve the continuity of the interaction force when switching from impedance to admittance control in the presence of a feedforward velocity term; and (ii) to adapt the switching parameters to improve the performance of the hybrid control framework and better exploit the properties of both impedance and admittance controllers. The proposed approach was compared in simulation with standard hybrid impedance/admittance control to show the improved performance, using a Franka Emika Panda robot as the reference platform for a realistic simulation.
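To make the switching idea concrete, here is a minimal 1-DoF Python sketch under an assumed unit-mass axis with hypothetical gains; it is not the paper's controller. Impedance mode maps measured motion to a force command, admittance mode maps measured force to a reference motion, and seeding the admittance state with the measured state at the switch (feedforward velocity included) is one way to keep the commanded interaction force continuous, in the spirit of contribution (i).

```python
dt = 1e-3
M, D, K = 1.0, 40.0, 400.0   # hypothetical target inertia, damping, stiffness

def impedance_cmd(e, ed, xdd_des, f_ext):
    """Impedance mode: measured motion in, force command out (unit-mass axis).
    The closed loop then obeys M*e_dd + D*e_d + K*e = f_ext."""
    return xdd_des + (f_ext - D * ed - K * e) / M - f_ext

class Admittance:
    """Admittance mode: measured force in, reference motion out; an inner
    position loop (not shown) tracks (xr, xdr)."""
    def __init__(self, x_meas, xd_meas):
        # Seeding with the measured state at the switch, feedforward
        # velocity included, keeps the commanded force continuous.
        self.xr, self.xdr = x_meas, xd_meas
    def step(self, x_des, xd_des, xdd_des, f_ext):
        e, ed = self.xr - x_des, self.xdr - xd_des
        edd = (f_ext - D * ed - K * e) / M    # same target dynamics
        self.xdr += (xdd_des + edd) * dt      # integrate the reference
        self.xr += self.xdr * dt
        return self.xr, self.xdr
```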
Offshore crane operations are frequently carried out in adverse weather. During load-landing or load-lifting operations, the impact between the load and the vessel is critical, as it can cause serious injuries and extensive damage. Reinforcement learning (RL) has been applied to several offshore crane operations, including load landing. In this paper, the Q-learning algorithm is used to develop optimal control sequences for the offshore crane's actuators that minimize the impact velocity between the crane's load and the moving vessel. To build the RL environment, a mathematical model is constructed for the dynamical analysis using the Denavit–Hartenberg (DH) technique and the Lagrange approach. The Double Q-learning algorithm is used to mitigate the overestimation bias common to Q-learning. The average return was used to assess the performance of the Q-learning algorithm. Furthermore, the trained control sequence was tested on a separate sample of episodes, confirming in this application domain the hypothesis that, unlike supervised learning, reinforcement learning yields only a locally, not globally, optimal control sequence.
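For reference, the canonical tabular Double Q-learning update (van Hasselt, 2010) is sketched below in Python; the discretized state/action encoding for the crane actuators and all hyperparameters are placeholder assumptions, not the paper's setup.

```python
import random
from collections import defaultdict

alpha, gamma, eps = 0.1, 0.99, 0.1     # hypothetical hyperparameters
QA, QB = defaultdict(float), defaultdict(float)

def act(s, actions):
    """Epsilon-greedy on the sum of both tables."""
    if random.random() < eps:
        return random.choice(actions)
    return max(actions, key=lambda a: QA[(s, a)] + QB[(s, a)])

def double_q_update(s, a, r, s_next, actions):
    """One table selects the greedy next action, the other evaluates it,
    which suppresses the max-operator overestimation bias of plain Q-learning."""
    select, evaluate = (QA, QB) if random.random() < 0.5 else (QB, QA)
    a_star = max(actions, key=lambda a2: select[(s_next, a2)])
    target = r + gamma * evaluate[(s_next, a_star)]
    select[(s, a)] += alpha * (target - select[(s, a)])
```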
This paper presents a power dispatch strategy that coordinates the main grid and distributed generators based on aggregative game theory and the Cournot price mechanism. The strategy aims to increase the electricity supply under power shortage conditions. Under the proposed strategy, this paper designs a discrete-time algorithm fusing an estimation technique with the DIGing method to solve the power shortage problem in a distributed way. The distributed algorithm protects privacy and information security and improves the power grid's scalability. Moreover, simulation results on a numerical example demonstrate the favorable performance and effectiveness of the proposed algorithm.
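As an illustration of aggregative gradient play under a Cournot price (a simplified stand-in for, not a reproduction of, the paper's estimation-plus-DIGing scheme), the sketch below lets each generator ascend its profit gradient while tracking the average output via dynamic average consensus; the cost coefficients, demand curve, and mixing matrix are all hypothetical.

```python
import numpy as np

N, steps, eta = 4, 3000, 0.01
a, b = 10.0, 0.05                      # Cournot inverse demand: price = a - b*sigma
c = np.array([1.0, 1.2, 0.8, 1.5])     # hypothetical quadratic generation costs
W = np.array([[.5, .25, 0., .25],      # doubly stochastic mixing matrix (ring of 4)
              [.25, .5, .25, 0.],
              [0., .25, .5, .25],
              [.25, 0., .25, .5]])

x = np.ones(N)                         # generator power set-points
s = x.copy()                           # local estimates of the average output

for _ in range(steps):
    sigma = N * s                              # each agent's estimate of the aggregate
    grad = a - b * sigma - b * x - 2 * c * x   # d/dx_i of (a - b*sigma)*x_i - c_i*x_i^2
    x_new = np.clip(x + eta * grad, 0.0, None) # projected gradient ascent on profit
    s = W @ s + (x_new - x)                    # dynamic average consensus tracking
    x = x_new

print("dispatch:", x.round(3), "price:", round(a - b * x.sum(), 3))
```

Each agent only exchanges its aggregate estimate with its neighbors, never its cost function or set-point, which is the sense in which such schemes preserve privacy.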