Memory

Memory API

Code: slm_lab/agent/memory

Memory is a class for data storage and access consistent with the RL agent API, i.e. it implements the update and sample methods. The underlying data format is numpy, which can be converted efficiently into PyTorch tensors with shared memory via torch.from_numpy. There are two types of memory in RL (a minimal API sketch follows the lists below):

For off-policy algorithms:

  • Replay

  • PrioritizedReplay

For on-policy algorithms:

  • OnPolicyReplay

  • OnPolicyBatchReplay
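
To make this concrete, below is a minimal sketch of the Memory API: a base class exposing update and sample, plus a simplified off-policy Replay. The class shapes and the memory_spec keys (max_size, batch_size) follow SLM Lab's general pattern but are illustrative, not the library's exact signatures.

```python
import numpy as np


class Memory:
    """Base memory API sketch: store experiences with update(), retrieve them with sample()."""

    def __init__(self, memory_spec):
        self.memory_spec = memory_spec
        self.to_train = False  # training signal read by the agent

    def update(self, state, action, reward, next_state, done):
        raise NotImplementedError

    def sample(self):
        raise NotImplementedError


class Replay(Memory):
    """Off-policy sketch: keep the last max_size experiences, sample random batches."""

    def __init__(self, memory_spec):
        super().__init__(memory_spec)
        self.max_size = memory_spec['max_size']
        self.batch_size = memory_spec['batch_size']
        self.buffers = {k: [] for k in ('states', 'actions', 'rewards', 'next_states', 'dones')}

    def update(self, state, action, reward, next_state, done):
        experience = dict(states=state, actions=action, rewards=reward,
                          next_states=next_state, dones=done)
        for key, buf in self.buffers.items():
            buf.append(experience[key])
            if len(buf) > self.max_size:
                buf.pop(0)  # discard the oldest experience

    def sample(self):
        size = len(self.buffers['states'])
        idxs = np.random.randint(size, size=self.batch_size)
        return {key: np.array([buf[i] for i in idxs]) for key, buf in self.buffers.items()}
```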

Reinforcement Learning (RL) agents learn by acting in environments. Each time an agent acts, it stores <s, a, s', r> (the state, the action taken, the next state, and the reward received) as an experience in memory.
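
For illustration, the hypothetical loop below generates and stores such experiences using the Replay sketch above. It assumes the classic gym step API and a random policy; the environment choice, spec values, and loop structure are placeholders, not SLM Lab's actual agent loop.

```python
import gym

env = gym.make('CartPole-v0')
memory = Replay({'max_size': 10000, 'batch_size': 32})

state = env.reset()
for t in range(1000):
    action = env.action_space.sample()  # random policy, just to generate experiences
    next_state, reward, done, info = env.step(action)
    memory.update(state, action, reward, next_state, done)  # store <s, a, s', r>
    state = env.reset() if done else next_state

batch = memory.sample()  # dict of numpy arrays, ready for torch.from_numpy
```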

RL algorithms vary in how they make use of past experiences: off-policy algorithms sample batches from the last N experiences the agent has had, while on-policy algorithms use all of the experiences gathered since the agent was last trained.
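
To make the on-policy case concrete, a minimal OnPolicyReplay-style sketch (building on the Memory base above, and again illustrative rather than the library's exact implementation) returns everything gathered since the last training step and then clears itself:

```python
class OnPolicyReplay(Memory):
    """On-policy sketch: return all experiences since the last train step, then clear."""

    def __init__(self, memory_spec):
        super().__init__(memory_spec)
        self.reset()

    def reset(self):
        self.buffers = {k: [] for k in ('states', 'actions', 'rewards', 'next_states', 'dones')}

    def update(self, state, action, reward, next_state, done):
        experience = dict(states=state, actions=action, rewards=reward,
                          next_states=next_state, dones=done)
        for key, buf in self.buffers.items():
            buf.append(experience[key])

    def sample(self):
        batch = {key: np.array(buf) for key, buf in self.buffers.items()}
        self.reset()  # on-policy data is used once, then discarded
        return batch
```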

Since an RL agent trains based on the time steps/experiences collected, the signal for training comes from the memory class via memory.to_train. That is, the memory class sets self.to_train = True when it is time to train.
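
As a hedged sketch of how that signal could be produced, a memory might flip to_train inside update once enough new data has accumulated. The episode-end rule below is purely illustrative; the actual trigger in SLM Lab depends on the memory class and the algorithm spec.

```python
class OnPolicyEpisodicSignal(OnPolicyReplay):
    """Illustrative subclass: signal training once an episode finishes."""

    def update(self, state, action, reward, next_state, done):
        super().update(state, action, reward, next_state, done)
        if done:
            self.to_train = True  # the agent checks memory.to_train and trains when set

    def sample(self):
        self.to_train = False  # clear the signal once the data is consumed
        return super().sample()
```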
