Q-learning-based navigation for mobile robots in continuous and dynamic environments

Autonomous collision-free navigation in dynamic environments is essential in many mobile robot applications. Reinforcement learning has the potential to automate the control design process, and it has been successfully applied to path planning and mobile robot navigation. Obtaining effective navigation strategies with reinforcement learning is, however, still very time-consuming, especially in complex continuous environments where the robot can easily become trapped.

In this work, we propose a novel state-space definition that includes information about the robot's most recent action. The proposed state variables for the target and nearby obstacles are binary and denote presence (or absence) within a corresponding region in the robot's frame of reference. In addition, we propose two heuristic algorithms that provide the robot with basic prior knowledge about a promising action in each state, which reduces the initial time-consuming blind exploration and thereby significantly shortens training time. We train a robot with our improved Q-learning approach to navigate continuous environments in a high-fidelity simulator.
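
To make the state-space idea concrete, the following is a minimal tabular sketch. The three-sector layout (left, front, right), the action set, the heuristic_action rule, and the PRIOR_BONUS seed are all illustrative assumptions: the abstract does not specify how many regions are used or how the two heuristics select a promising action.

```python
import random

# Assumed discretization: three angular sectors in the robot's frame
# and three discrete actions (not the paper's exact definitions).
SECTORS = ("left", "front", "right")
ACTIONS = ("turn_left", "go_forward", "turn_right")
PRIOR_BONUS = 0.5  # assumed seed value for the heuristic's preferred action

def encode_state(target_sector, obstacle_flags, last_action):
    """Binary state: one-hot target sector, per-sector obstacle presence,
    plus the index of the robot's most recent action."""
    target_bits = tuple(int(s == target_sector) for s in SECTORS)
    obstacle_bits = tuple(int(obstacle_flags[s]) for s in SECTORS)
    return target_bits + obstacle_bits + (ACTIONS.index(last_action),)

def heuristic_action(target_sector, obstacle_flags):
    """Stand-in for the paper's heuristics: steer toward the target sector
    unless it is blocked, otherwise pick any free sector."""
    steer = {"left": "turn_left", "front": "go_forward", "right": "turn_right"}
    if not obstacle_flags[target_sector]:
        return steer[target_sector]
    free = [s for s in SECTORS if not obstacle_flags[s]]
    return steer[free[0]] if free else "go_forward"

Q = {}  # tabular Q-function over the discrete binary states

def get_q(state, target_sector, obstacle_flags):
    """Lazily create Q-values, seeding the heuristic's action with a bonus
    so early exploration is biased away from blind random search."""
    if state not in Q:
        Q[state] = {a: 0.0 for a in ACTIONS}
        Q[state][heuristic_action(target_sector, obstacle_flags)] = PRIOR_BONUS
    return Q[state]

def select_action(state, target_sector, obstacle_flags, eps=0.1):
    """Epsilon-greedy selection over the (possibly seeded) Q-values."""
    q = get_q(state, target_sector, obstacle_flags)
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(q, key=q.get)

def q_update(state, action, reward, next_state, alpha=0.1, gamma=0.95):
    """Standard tabular Q-learning update:
    Q(s,a) += alpha * (reward + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[next_state].values()) if next_state in Q else 0.0
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```

Seeding the Q-table is one common way to inject prior knowledge into Q-learning; the paper's two heuristics may instead shape rewards or gate exploration, so heuristic_action above should be read purely as illustration.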