Algorithm is the main class which implements an RL algorithm. This includes declaring its networks and variables, acting, sampling from memory, and training. It initializes its networks and memory by simply calling the Net and Memory classes with their specs. The loss function for each algorithm is also implemented here.
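As a rough sketch of that interface, the skeleton below shows how such a class might be organized. The method names and signatures here are illustrative assumptions, not SLM Lab's exact API.

```python
# Hypothetical skeleton of an Algorithm class as described above.
# Method names and signatures are assumptions for illustration only.
class Algorithm:
    def __init__(self, agent, algorithm_spec):
        self.agent = agent
        self.algorithm_spec = algorithm_spec
        self.init_algorithm_params()  # read hyperparameters such as gamma from the spec
        self.init_nets()              # build networks by calling the Net classes with their specs

    def init_algorithm_params(self):
        '''Set hyperparameters (e.g. gamma, action_policy) from the algorithm spec.'''
        raise NotImplementedError

    def init_nets(self):
        '''Initialize the networks used by this algorithm.'''
        raise NotImplementedError

    def act(self, state):
        '''Select an action for a state using the action policy and pdtype.'''
        raise NotImplementedError

    def sample(self):
        '''Sample a batch of experiences from the agent's memory.'''
        raise NotImplementedError

    def train(self):
        '''Compute the loss and update the networks; return the loss.'''
        raise NotImplementedError
```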
Each algorithm comes with a number of hyperparameters that can be specified through an algorithm spec in the agent spec file. The main keys are listed below, with an illustrative example after the list.
name: name of an implemented algorithm class. This must be a class that conforms to the Algorithm base class API and is saved in a .py file under slm_lab/agent/algorithm/
action_pdtype: specifies the probability distribution that actions are sampled from. For example, "Argmax" or "Categorical" for discrete action spaces, or "Normal", "MultivariateNormal", and "Gumbel" for continuous action spaces. These are declared in slm_lab/agent/algorithm/policy_util.py
action_policy: specifies how the agent should act, e.g. "epsilon_greedy". These are also declared in slm_lab/agent/algorithm/policy_util.py
gamma: how much to discount the future for the returns. 0 corresponds to complete myopia, where the agent cares only about the current time step; 1 corresponds to no discounting, where each future state matters as much as the current state.
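To make these keys concrete, here is a hedged example of an algorithm spec written as a Python dict (the actual spec files are JSON with the same structure). The values are illustrative assumptions for a DQN-style agent, not a canonical spec.

```python
# Illustrative algorithm spec for a hypothetical DQN-style agent.
# The values are assumptions for demonstration, not a canonical SLM Lab spec.
algorithm_spec = {
    "name": "DQN",                      # implemented algorithm class to use
    "action_pdtype": "Argmax",          # action distribution for a discrete action space
    "action_policy": "epsilon_greedy",  # how the agent acts given that distribution
    "gamma": 0.99,                      # discount factor for returns
}

# gamma controls how future rewards are weighted in the return:
# return = sum over t of gamma^t * reward_t
rewards = [1.0, 1.0, 1.0]
ret = sum(algorithm_spec["gamma"] ** t * r for t, r in enumerate(rewards))
print(ret)  # 2.9701: slightly less than 3.0 because future rewards are discounted
```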
Other algorithm spec hyperparameters are specific to algorithm implementations. For those, refer to the class documentation of the algorithms in slm_lab/agent/algorithm/.
For more concrete examples of algorithm specs specific to each algorithm, refer to the existing spec files under slm_lab/spec/.
To learn more about the algorithms themselves, see the subpages that follow, which showcase a subset of the algorithms in SLM Lab. See the SLM Lab repository README for the full list of implemented algorithms.