Abstract

The advancement of Artificial Intelligence (AI), particularly in the field of Reinforcement Learning (RL), has led to significant breakthroughs in numerous domains, ranging from autonomous systems to complex game environments. Amid this progress, the emergence and evolution of algorithms such as Deep Q-Networks (DQN), Deep Deterministic Policy Gradient (DDPG), and Proximal Policy Optimization (PPO) have been pivotal. These algorithms, each with unique approaches and strengths, have become fundamental tools for tackling diverse RL challenges. This study dissects and compares these three influential algorithms to provide a clearer understanding of their mechanics, efficiency, and applicability. We delve into the theoretical underpinnings of DQN, DDPG, and PPO, and assess their performance across a variety of standard benchmarks. Through this comparative analysis, we seek to offer practical guidance for selecting the appropriate algorithm for a given environment and to highlight potential pathways for future research in Reinforcement Learning.

Document Type

Article

Author's School

McKelvey School of Engineering

Author's Department

Electrical and Systems Engineering

Class Name

Electrical and Systems Engineering Undergraduate Research

Date of Submission

12-6-2023
