Replay

Code: slm_lab/agent/memory/replay.py

Experiences are stored in a circular buffer that grows until the memory reaches capacity. Once the memory is at capacity, the oldest experiences are overwritten to make space for the newest.

Batches of size batch_size are sampled from the entire memory. Sampling is uniformly random unless the PrioritizedReplay memory is used.

Suitable for off-policy algorithms.
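
To make the mechanics concrete, below is a minimal sketch of a circular buffer with uniform sampling. It is illustrative only, not SLM Lab's Replay class; the ReplayBuffer name and the experience tuple layout are assumptions.

import random

class ReplayBuffer:
    def __init__(self, max_size):
        self.max_size = max_size
        self.buffer = []  # grows until max_size is reached
        self.head = 0     # next slot to overwrite once at capacity

    def add(self, experience):
        # experience could be a (state, action, reward, next_state, done) tuple
        if len(self.buffer) < self.max_size:
            self.buffer.append(experience)
        else:
            # at capacity: overwrite the oldest experience
            self.buffer[self.head] = experience
        self.head = (self.head + 1) % self.max_size

    def sample(self, batch_size):
        # uniform random sampling (without replacement) over the whole memory
        return random.sample(self.buffer, batch_size)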

Source Documentation

Refer to the class documentation and example memory spec in the source: slm_lab/agent/memory/replay.py#L43-L67

Example Memory Spec

This specification creates a Replay memory with a maximum capacity of 10,000 elements, i.e. it can store experiences from 10,000 time steps. When memory.sample() is called, it returns a batch of 32 elements. Setting use_cer to true enables CER (Combined Experience Replay), which guarantees that the latest experience is included in every sampled batch.

{
    ...
    "agent": [{
      "memory": {
        "name": "Replay",
        "batch_size": 32,
        "max_size": 10000,
        "use_cer": true
      }
    }],
    ...
}
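
With use_cer enabled, sampling still draws indices uniformly but reserves one slot for the most recently added experience. Here is a hedged sketch of that idea using index-based sampling; it is an illustration, not the library's exact code:

import numpy as np

def sample_idxs(size, latest_idx, batch_size, use_cer=True):
    # draw batch_size indices uniformly at random (with replacement)
    idxs = np.random.randint(size, size=batch_size)
    if use_cer:
        # CER: always include the newest experience in the batch
        idxs[-1] = latest_idx
    return idxs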

For more concrete examples of memory specs specific to algorithms, refer to the existing spec files.