# DcPPO: Dual-constrained Proximal Policy Optimization

## Introduction

This repository is the official implementation of DcPPO (Dual-constrained Proximal Policy Optimization).
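The repository does not restate the DcPPO objective here, but as a PPO variant it builds on the standard clipped surrogate objective. For reference, below is a minimal NumPy sketch of that baseline PPO objective; the function name, array shapes, and clip coefficient are illustrative and not taken from this repo's code.

```python
import numpy as np

def ppo_clip_loss(ratio, advantage, clip_eps=0.2):
    """Standard PPO clipped surrogate objective, returned as a loss.

    ratio:     pi_new(a|s) / pi_old(a|s) per sampled action.
    advantage: advantage estimates for the same samples.
    """
    unclipped = ratio * advantage
    # Clip the probability ratio to [1 - eps, 1 + eps] before weighting.
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantage
    # Elementwise minimum gives the pessimistic bound; negate to minimize.
    return -np.mean(np.minimum(unclipped, clipped))
```

For example, with ratios `[1.5, 0.5]` and unit advantages, the first sample is clipped at `1.2` while the second is left at `0.5`, giving a loss of `-0.85`.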

## Run

1. Create a virtual environment using conda and activate it:

   ```shell
   conda create -n dcppo python=3.10 -y
   conda activate dcppo
   ```

2. Install the required packages:

   ```shell
   pip install -r requirements.txt
   ```

3. Run the training script:

   ```shell
   # run DcPPO
   python run.py --env dm_control/ball_in_cup-catch-v0 --device cpu --seed 0
   # run Stable Baselines3
   python sb_run.py --env dm_control/ball_in_cup-catch-v0 --algo ppo --device cpu --seed 0
   ```

4. Use TensorBoard to visualize the training process:

   ```shell
   tensorboard --logdir logs
   ```
