
Path following Optimization for an Underactuated USV Using Smoothly-Convergent Deep Reinforcement Learning

Journal Article


Abstract


  • This paper addresses the path-following problem for an underactuated unmanned surface vessel (USV) using deep reinforcement learning (DRL). A smoothly convergent DRL (SCDRL) method is proposed based on the deep Q-network (DQN). In this method, an improved DQN structure serves as the decision-making network, reducing the complexity of the control law for path following of a three-degree-of-freedom USV model. An exploration function based on adaptive gradient descent extracts training knowledge for the DQN from empirical data. In addition, a new reward function evaluates the output decisions of the DQN and thereby reinforces the decision-making network in controlling USV path following. Numerical simulations were conducted to evaluate the performance of the proposed method. The results demonstrate that SCDRL converges more smoothly than traditional deep Q-learning, while its path-following error is comparable to that of existing methods. Owing to its usability and generality, the proposed method is well suited to practical USV path-following applications.
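The abstract mentions a reward function that scores the DQN's decisions during path following but does not reproduce its exact form. As a rough illustration only, a reward of this kind is often shaped from the cross-track error and heading error; the function below is a hypothetical sketch (the names `path_following_reward`, `k_e`, and `k_psi` are assumptions, not taken from the paper):

```python
import math

def path_following_reward(cross_track_error, heading_error, k_e=1.0, k_psi=0.5):
    """Hypothetical shaped reward for USV path following.

    Returns a value near 1 when the vessel is on the desired path
    (zero cross-track error) and aligned with it (zero heading error),
    decaying smoothly toward 0 as either error grows. The gains k_e and
    k_psi weight the two error terms and are illustrative defaults.
    """
    return math.exp(-(k_e * abs(cross_track_error) + k_psi * abs(heading_error)))

# On the path and aligned: maximal reward.
r_on_path = path_following_reward(0.0, 0.0)
# Off the path: strictly smaller reward, giving the agent a gradient to follow.
r_off_path = path_following_reward(2.0, 0.3)
```

A smooth, bounded reward of this shape is one common way to avoid the sparse-reward problem in DQN training; the actual SCDRL reward in the paper may differ substantially.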

UOW Authors


  • Li, Zhixiong (external author)

Publication Date


  • 2021

Citation


  • Zhao, Y., Qi, X., Ma, Y., Li, Z., Malekian, R., & Sotelo, M. A. (2021). Path following Optimization for an Underactuated USV Using Smoothly-Convergent Deep Reinforcement Learning. IEEE Transactions on Intelligent Transportation Systems, 22(10), 6208-6220. doi:10.1109/TITS.2020.2989352

Scopus Eid


  • 2-s2.0-85101067706

Start Page


  • 6208

End Page


  • 6220

Volume


  • 22

Issue


  • 10

Place Of Publication

