🎲 REINFORCE

REINFORCE (Williams, 1992) directly learns a parameterized policy, $\pi$, which maps states to probability distributions over actions.

Starting with random parameter values, the agent uses this policy to act in an environment and receive rewards. After an episode has finished, the "goodness" of each action, represented by $f(\tau)$, is calculated from the episode trajectory. The parameters of the policy are then updated in a direction which makes good actions ($f(\tau) > 0$) more likely, and bad actions ($f(\tau) < 0$) less likely. Good actions are reinforced, bad actions are discouraged.

The agent then uses the updated policy to act in the environment, and the training process repeats.
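
For concreteness, here is a minimal sketch of such a parameterized policy for a discrete action space, assuming PyTorch (the `Policy` class, layer sizes, and the `state_dim`/`num_actions` values are illustrative, not SLM Lab's implementation):

    # Minimal parameterized policy: a small network maps a state to a
    # probability distribution over discrete actions (a Categorical).
    import torch
    import torch.nn as nn
    from torch.distributions import Categorical

    class Policy(nn.Module):
        def __init__(self, state_dim, num_actions, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, num_actions),
            )

        def forward(self, state):
            logits = self.net(state)
            return Categorical(logits=logits)   # pi_theta(a|s)

    policy = Policy(state_dim=4, num_actions=2)
    dist = policy(torch.randn(4))               # action distribution for one state
    action = dist.sample()                      # act by sampling
    log_prob = dist.log_prob(action)            # needed for the REINFORCE update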

REINFORCE is an on-policy algorithm: only data gathered using the current policy can be used to update the parameters. Once the policy parameters have been updated, all previously gathered data must be discarded and collection restarted with the new policy.
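
This constraint can be sketched as follows (hypothetical `policy.act`, `update_fn`, and a simplified `env.reset`/`env.step` interface; not SLM Lab's API): data from the current policy is used for exactly one update and then thrown away.

    def run_on_policy(policy, env, num_episodes, update_fn):
        """Hypothetical loop illustrating the on-policy constraint."""
        for _ in range(num_episodes):
            trajectory = []                              # data from the CURRENT policy only
            state, done = env.reset(), False
            while not done:
                action = policy.act(state)               # act with the current policy
                next_state, reward, done = env.step(action)  # simplified env interface
                trajectory.append((state, action, reward))
                state = next_state
            update_fn(policy, trajectory)                # one parameter update with this data
            # trajectory is discarded here; stale data is never reused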

There are a number of different approaches to calculating $f(\tau)$. Method 3, outlined below, is common. It captures the idea that the absolute quality of the actions matters less than their quality relative to some baseline. One option for a baseline is the average of $f(\tau)$ over the training data (typically one episode trajectory).

Algorithm: REINFORCE with baseline

$$
\begin{aligned}
& \text{Initialize weights } \theta \text{, learning rate } \alpha \\
& \text{for each episode (trajectory) } \tau = \{s_0, a_0, r_0, s_1, \cdots, r_T\} \sim \pi_\theta \\
& \quad \text{for } t = 0 \text{ to } T \text{ do} \\
& \quad \quad \theta \leftarrow \theta + \alpha \, f(\tau)_t \, \nabla_\theta \log \pi_\theta(a_t \mid s_t) \\
& \quad \text{end for} \\
& \text{end for}
\end{aligned}
$$
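
A minimal PyTorch sketch of this update, assuming a policy that returns a `torch.distributions.Categorical` (as in the earlier sketch). For efficiency it batches the per-timestep updates of the pseudocode into a single gradient step over the episode; `reinforce_update` and the tensor shapes are assumptions, not SLM Lab's implementation:

    import torch

    def reinforce_update(policy, optimizer, states, actions, f_tau):
        """One REINFORCE update over a single episode (timesteps batched).

        states:  tensor of shape (T, state_dim)
        actions: tensor of shape (T,)
        f_tau:   tensor of shape (T,), e.g. discounted returns minus a baseline
        """
        dist = policy(states)                  # pi_theta(.|s_t) for every timestep
        log_probs = dist.log_prob(actions)     # log pi_theta(a_t|s_t), shape (T,)
        loss = -(f_tau * log_probs).sum()      # descending this loss ascends
        optimizer.zero_grad()                  # f(tau)_t * grad log pi_theta(a_t|s_t)
        loss.backward()
        optimizer.step()                       # theta <- theta + alpha * gradient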

Methods for calculating $f(\tau)_t$:

$$
\begin{aligned}
& \text{Given } \nabla_\theta J(\theta) \approx \sum_{t \geq 0} f(\tau) \, \nabla_\theta \log \pi_\theta(a_t \mid s_t), \text{ successively improve } f(\tau) \text{ with:} \\
& \quad 1.\ \text{reward as weighting: } f(\tau) = \sum_{t' \geq t} r_{t'} \\
& \quad 2.\ \text{add a discount factor: } f(\tau) = \sum_{t' \geq t} \gamma^{t'-t} r_{t'} \\
& \quad 3.\ \text{introduce a baseline: } f(\tau) = \sum_{t' \geq t} \gamma^{t'-t} r_{t'} - b(s_t)
\end{aligned}
$$
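
The three variants can be computed from an episode's rewards as in the sketch below (assuming NumPy; `f_tau` is an illustrative helper, and the baseline used here is a constant, the mean return over the episode, rather than a learned state-dependent $b(s_t)$):

    import numpy as np

    def f_tau(rewards, gamma=0.99, discount=True, use_baseline=True):
        """Reward-to-go (1), discounted (2), and baseline-subtracted (3) variants."""
        rewards = np.asarray(rewards, dtype=np.float64)
        returns = np.zeros(len(rewards))
        running = 0.0
        for t in reversed(range(len(rewards))):          # accumulate future rewards
            running = rewards[t] + (gamma if discount else 1.0) * running
            returns[t] = running                         # methods 1 and 2
        if use_baseline:
            returns -= returns.mean()                    # method 3: subtract a baseline
        return returns

    print(f_tau([1.0, 0.0, 1.0]))                        # example usage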

See slm_lab/spec/benchmark/reinforce/ for example REINFORCE specs.

Basic Parameters

    "agent": {
      "name": str,
      "algorithm": {
        "name": str,
        "action_pdtype": str,
        "action_policy": str,
        "gamma": float,
        "training_frequency": int,
        "entropy_coef_spec": {...},
      },
      "memory": {
        "name": str,
      },
      "net": {
        "type": str,
        "hid_layers": list,
        "hid_layers_activation": str,
        "optim_spec": dict,
      }
    },
    ...
}
  • algorithm

    • action_pdtype the type of probability distribution used to sample actions; a general algorithm parameter.

    • action_policy string specifying which policy to use to act. For example, "Categorical" (for discrete action spaces), "Normal" (for continuous action spaces with one dimension), or "default" to automatically switch between the two depending on the environment.

    • training_frequency how many episodes of data to collect before each training iteration. A common value is 1.

    • entropy_coef_spec schedule for entropy coefficient added to the loss to encourage exploration. Example: {"name": "no_decay", "start_val": 0.01, "end_val": 0.01, "start_step": 0, "end_step": 0}

    • center_return (optional, default false) whether to center returns by subtracting the mean before computing policy gradient. Can improve training stability.

  • net the neural network parameters listed in the spec above: type (network type), hid_layers (hidden layer sizes), hid_layers_activation (activation function), and optim_spec (optimizer configuration). An illustrative spec sketch follows this list.
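
The fragment below sketches how these parameters fit together in a full spec. It is illustrative only: the concrete values, and names such as "OnPolicyReplay", "MLPNet", and "selu", are assumptions about typical SLM Lab settings; consult slm_lab/spec/benchmark/reinforce/ for real, tested specs.

    # Hypothetical REINFORCE spec fragment (illustrative values, written as a
    # Python dict; real specs live in slm_lab/spec/benchmark/reinforce/).
    spec_sketch = {
        "agent": {
            "name": "Reinforce",
            "algorithm": {
                "name": "Reinforce",
                "action_pdtype": "default",
                "action_policy": "default",
                "gamma": 0.99,                 # assumed discount factor
                "training_frequency": 1,       # train after every episode
                "entropy_coef_spec": {
                    "name": "no_decay",
                    "start_val": 0.01, "end_val": 0.01,
                    "start_step": 0, "end_step": 0,
                },
            },
            "memory": {"name": "OnPolicyReplay"},      # assumed memory class name
            "net": {
                "type": "MLPNet",                      # assumed network type name
                "hid_layers": [64],
                "hid_layers_activation": "selu",
                "optim_spec": {"name": "Adam", "lr": 0.002},
            },
        },
        # ... env, body, and meta sections omitted
    }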

Advanced Parameters
