โญPrioritizedReplay

PrioritizedReplay

Code: slm_lab/agent/memory/prioritized.py

Prioritized Experience Replay (PER) extends Replay by prioritizing experiences for sampling based on the error in their Q-value estimates (the TD error): transitions with larger errors are sampled more often, so the agent trains more frequently on the experiences it predicts worst.

Suitable for off-policy algorithms.
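
To make the prioritization concrete, here is a minimal sketch of the priority computation from the original PER paper, using hypothetical names rather than SLM Lab's actual implementation:

import numpy as np

def compute_priority(td_error, alpha=0.6, epsilon=1e-4):
    # PER priority: p = (|delta| + epsilon)^alpha.
    # epsilon keeps the priority nonzero so every experience retains
    # some chance of being sampled; alpha in [0, 1] interpolates between
    # uniform sampling (alpha=0) and pure greedy prioritization (alpha=1).
    return (np.abs(td_error) + epsilon) ** alpha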

Source Documentation

Refer to the class documentation and example memory spec from the source: slm_lab/agent/memory/prioritized.py#L87-L104

Example Memory Spec

This specification creates a PrioritizedReplay (off-policy) memory with a maximum capacity of 50,000 elements and a batch size of 32, with CER (Combined Experience Replay) disabled. The alpha and epsilon parameters are specific to PER: alpha controls how strongly sampling is skewed toward high-error experiences, and epsilon is a small constant added to each error so that no experience ever has zero priority.

{
    ...
    "agent": {
      "memory": {
        "name": "PrioritizedReplay",
        "alpha": 0.6,
        "epsilon": 0.0001,
        "batch_size": 32,
        "max_size": 50000,
        "use_cer": false
      }
    },
    ...
}
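
The alpha and epsilon values above shape the sampling distribution. As an illustrative sketch (assumed variable names, not SLM Lab's API), each experience is drawn with probability proportional to its priority:

import numpy as np

alpha, epsilon = 0.6, 0.0001  # values from the spec above

# Hypothetical absolute TD errors for four stored experiences.
td_errors = np.array([0.01, 0.5, 2.0, 0.0])
priorities = (np.abs(td_errors) + epsilon) ** alpha
probs = priorities / priorities.sum()  # normalize to a distribution

# Draw a (tiny) batch proportionally to priority; in practice the batch
# size would be 32 as in the spec, and a sum-tree keeps sampling O(log n).
batch_idxs = np.random.choice(len(td_errors), size=2, p=probs)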

For more concrete examples of memory specs specific to algorithms, refer to the existing spec files.
