Abstract
Air-ground cooperation is important in certain extreme-environment missions, which require an unmanned aerial vehicle (UAV) to track a ground vehicle in real time. Since reinforcement learning (RL) has achieved great success in many planning and control problems, this work presents an RL-based path-planning method for UAV object tracking that takes images from a visual sensor as input. A convolutional neural network (CNN) with a spatial softmax layer is used to detect the object in the images. The tracking results of the OpenCV-based CSR-DCF filter are combined with the CNN output to improve training efficiency and tracking performance. Three independent experiments under different conditions are conducted in V-REP-based simulated environments, in which a quadcopter is trained to track a ground robot. The results show that the UAV performs the tracking task well, with a mean absolute error (MAE) of 0.23 m in x and 0.19 m in y.
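The abstract mentions combining the OpenCV-based CSR-DCF (CSRT) tracker with the CNN detector output; the sketch below illustrates one way such a fusion could look in Python. It is not the authors' implementation: the cnn_detect callback, the alpha weighting, and the fall-back logic are illustrative assumptions, and only the cv2.TrackerCSRT_create / cv2.legacy.TrackerCSRT_create constructors are actual OpenCV APIs (available in opencv-contrib builds).

```python
# Minimal sketch (assumption, not the authors' code): run the OpenCV CSR-DCF
# (CSRT) tracker alongside a CNN detector and fuse the two position estimates.
import cv2
import numpy as np


def make_csrt_tracker():
    # Depending on the OpenCV build, the constructor may live under cv2.legacy.
    if hasattr(cv2, "TrackerCSRT_create"):
        return cv2.TrackerCSRT_create()
    return cv2.legacy.TrackerCSRT_create()


def bbox_center(bbox):
    x, y, w, h = bbox
    return np.array([x + w / 2.0, y + h / 2.0])


def track(video_path, init_bbox, cnn_detect, alpha=0.5):
    """Yield a fused (x, y) image-plane target position per frame.

    cnn_detect(frame) -> (x, y) stands in for the CNN + spatial softmax
    detector described in the paper; alpha is an illustrative weight.
    """
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        return
    tracker = make_csrt_tracker()
    tracker.init(frame, init_bbox)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        got_box, bbox = tracker.update(frame)
        cnn_xy = np.asarray(cnn_detect(frame), dtype=float)
        if got_box:
            # Weighted average of the filter-based and CNN-based estimates.
            fused = alpha * bbox_center(bbox) + (1.0 - alpha) * cnn_xy
        else:
            # Fall back to the detector if the correlation filter loses the target.
            fused = cnn_xy
        yield fused
    cap.release()
```

In such a setup, the fused image-plane position would serve as (part of) the observation fed to the RL path-planning policy; the weighting scheme shown here is only one possible design choice.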
Funding Information
  • National Natural Science Foundation of China