RNN
Recurrent Neural Network
Code: slm_lab/agent/net/recurrent.py
These networks take a sequence of states as input and produce one or more outputs. They consist of zero or more state-processing layers (organized as an MLP). All of the states are passed through the MLP (if there is one), and the transformed states are then passed in sequence to the recurrent layer. RNNs are structured to retain information about a sequence of inputs, which makes them well suited to environments in which deciding how to act in state s benefits from knowing which states came previously.
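As a rough illustration of this structure, the sketch below shows each state in the input sequence being transformed by an MLP before the sequence is fed to a recurrent cell. This is a minimal PyTorch sketch, not SLM Lab's actual RecurrentNet; the class name, sizes, and output head are all hypothetical.

```python
import torch
import torch.nn as nn

class SimpleRecurrentNet(nn.Module):
    '''Hypothetical sketch: state-processing MLP followed by a GRU layer.'''

    def __init__(self, state_dim, fc_hid_layers, rnn_hidden_size, out_dim):
        super().__init__()
        # optional state-processing MLP, applied to every state in the sequence
        layers, in_dim = [], state_dim
        for hid in fc_hid_layers:
            layers += [nn.Linear(in_dim, hid), nn.ReLU()]
            in_dim = hid
        self.fc = nn.Sequential(*layers)  # identity if fc_hid_layers is empty
        # recurrent layer consumes the transformed states in sequence
        self.rnn = nn.GRU(in_dim, rnn_hidden_size, batch_first=True)
        self.out = nn.Linear(rnn_hidden_size, out_dim)

    def forward(self, states):
        # states: (batch, seq_len, state_dim)
        batch, seq_len, _ = states.shape
        # pass every state through the MLP, then restore the sequence shape
        x = self.fc(states.reshape(batch * seq_len, -1))
        x = x.reshape(batch, seq_len, -1)
        _, h_n = self.rnn(x)  # h_n: (num_layers, batch, rnn_hidden_size)
        return self.out(h_n[-1])  # output computed from the final hidden state

net = SimpleRecurrentNet(state_dim=8, fc_hid_layers=[256, 128], rnn_hidden_size=64, out_dim=4)
out = net(torch.randn(2, 4, 8))  # a batch of 2 sequences of 4 states each
print(out.shape)  # torch.Size([2, 4])
```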
Source Documentation
Refer to the class documentation and example net spec from the source: slm_lab/agent/net/recurrent.py#L10-L71
Example Net Spec
This specification instantiates a RecurrentNet with two components: first, a state-processing MLP with 2 hidden layers of 256 and 128 nodes respectively and rectified linear (ReLU) activations; this is followed by a single recurrent GRU layer with a hidden state of 64 units. The optimizer is Adam with a learning rate of 0.01, and the number of sequential states used as input to the network is 4. The rest of the spec is annotated below.
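A net spec matching this description could look like the following sketch. The key names are assumed from the RecurrentNet docstring in slm_lab/agent/net/recurrent.py; fields not mentioned above (loss_spec, gpu) are illustrative assumptions, so consult the source file for the full set of options.

```json
"net": {
    "type": "RecurrentNet",
    "cell_type": "GRU",
    "fc_hid_layers": [256, 128],
    "hid_layers_activation": "relu",
    "rnn_hidden_size": 64,
    "rnn_num_layers": 1,
    "seq_len": 4,
    "loss_spec": {
        "name": "MSELoss"
    },
    "optim_spec": {
        "name": "Adam",
        "lr": 0.01
    },
    "gpu": false
}
```

Here fc_hid_layers and hid_layers_activation define the state-processing MLP; cell_type, rnn_hidden_size, rnn_num_layers, and seq_len configure the recurrent layer; and loss_spec and optim_spec select the loss function and optimizer.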
For more concrete examples of net specs tailored to particular algorithms, refer to the existing spec files.