REINFORCE
REINFORCE (Williams, 1992) directly learns a parameterized policy, π_θ, which maps states to probability distributions over actions.
Starting with random parameter values, the agent uses this policy to act in an environment and receive rewards. After an episode has finished, the "goodness" of each action, represented by the reinforcing signal Ψ_t, is calculated from the episode trajectory. The policy parameters are then updated in a direction that makes good actions more likely and bad actions less likely. Good actions are reinforced, bad actions are discouraged.
The agent then uses the updated policy to act in the environment, and the training process repeats.
REINFORCE is an on-policy algorithm: only data gathered using the current policy can be used to update the parameters. Once the policy parameters have been updated, all previously gathered data must be discarded and collection restarted with the new policy.
There are a number of different approaches to calculating Ψ_t. Method 3, outlined below, is common. It captures the idea that the absolute quality of an action matters less than its quality relative to some baseline. One option for the baseline is the average of Ψ_t over the training data (typically one episode trajectory).
Algorithm: REINFORCE with baseline
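A minimal sketch of this algorithm in PyTorch, assuming a Gymnasium-style environment with discrete actions; PolicyNet, discounted_returns, and train_episode are illustrative names, not SLM Lab's API:

```python
import torch
import torch.nn as nn
from torch.distributions import Categorical

class PolicyNet(nn.Module):
    """MLP mapping a state to a probability distribution over discrete actions."""
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.model = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim),
        )

    def forward(self, state):
        return Categorical(logits=self.model(state))

def discounted_returns(rewards, gamma):
    """G_t = sum over t' >= t of gamma^(t'-t) * r_t' for one episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return returns[::-1]

def train_episode(env, policy, optimizer, gamma=0.99):
    """Collect one on-policy episode, then perform a single REINFORCE update."""
    state, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        dist = policy(torch.as_tensor(state, dtype=torch.float32))
        action = dist.sample()  # act with the current policy (on-policy)
        log_probs.append(dist.log_prob(action))
        state, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated
    returns = torch.tensor(discounted_returns(rewards, gamma))
    baseline = returns.mean()      # average return over the episode as baseline
    psi = returns - baseline       # "goodness" of each action relative to baseline
    loss = -(torch.stack(log_probs) * psi).sum()
    optimizer.zero_grad()
    loss.backward()                # good actions become more likely
    optimizer.step()
    return sum(rewards)            # the trajectory is now stale and is discarded
```

A typical driver would call train_episode once per episode (matching a training_frequency of 1), e.g. with env = gymnasium.make("CartPole-v1"), policy = PolicyNet(4, 2), and optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3).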
Methods for calculating Ψ_t:
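As a hedged reconstruction, three standard choices, of which the third is the baselined form described above, are:

```latex
% Standard choices for the reinforcing signal \Psi_t of action a_t
% (notation assumed; b is a baseline such as the mean return):
\begin{align*}
\Psi_t &= \sum_{t'=0}^{T} \gamma^{t'} r_{t'}       && \text{1. total discounted episode return} \\
\Psi_t &= \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}     && \text{2. discounted return following } a_t \\
\Psi_t &= \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} - b && \text{3. return following } a_t \text{ minus a baseline } b
\end{align*}
```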
See reinforce.json for example specs of variations of the REINFORCE algorithm.
Basic Parameters
algorithm
  name: general param
  action_pdtype: general param
  action_policy: string specifying which policy to use to act. For example, "Categorical" (for discrete action spaces), "Normal" (for continuous action spaces with one dimension), or "default" to automatically switch between the two depending on the environment.
  gamma: general param
  training_frequency: how many episodes of data to collect before each training iteration. A common value is 1.
memory
  name: general param. Compatible types: "OnPolicyReplay", "OnPolicyBatchReplay"
  batch_size: number of examples to collect before training. Only relevant for the batch on-policy memory, "OnPolicyBatchReplay"
net
  type: general param. Compatible types: all networks.
  hid_layers: general param
  hid_layers_activation: general param
  optim_spec: general param
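Putting the basic parameters together, a hypothetical spec fragment might look like the following; the key names come from the list above, while the values and exact nesting are illustrative and should be checked against reinforce.json:

```json
{
  "agent": [{
    "name": "Reinforce",
    "algorithm": {
      "name": "Reinforce",
      "action_pdtype": "default",
      "action_policy": "default",
      "gamma": 0.99,
      "training_frequency": 1
    },
    "memory": {
      "name": "OnPolicyReplay"
    },
    "net": {
      "type": "MLPNet",
      "hid_layers": [64],
      "hid_layers_activation": "relu",
      "optim_spec": { "name": "Adam", "lr": 0.002 }
    }
  }]
}
```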
Advanced Parameters
net
  rnn_hidden_size: general param
  rnn_num_layers: general param
  seq_len: general param
  clip_grad: general param
  clip_grad_val: general param
  lr_decay: general param
  lr_decay_frequency: general param
  lr_decay_min_timestep: general param
  lr_anneal_timestep: general param
  gpu: general param
entropy: whether to add an entropy term to the loss to encourage exploration
entropy_coef: coefficient multiplying the entropy of the action distribution when it is added to the loss
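As a short sketch of where these two parameters enter, assuming the loss formulation from the sketch above; all values here are illustrative:

```python
import torch
from torch.distributions import Categorical

entropy_coef = 0.01  # the entropy_coef spec parameter
dist = Categorical(logits=torch.tensor([1.0, 0.5, -0.5]))
log_prob = dist.log_prob(torch.tensor(1))
psi = torch.tensor(0.7)  # reinforcing signal for this action

# Subtracting the weighted entropy lowers the loss for more
# exploratory (higher-entropy) policies.
loss = -(log_prob * psi) - entropy_coef * dist.entropy()
```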