SLM Lab
RNN

Recurrent Neural Network

These networks take a sequence of states as input and produce one or more outputs. They consist of zero or more state-processing layers (organized as an MLP) followed by a recurrent layer. Each state is first passed through the MLP (if there is one), and the transformed states are then fed in sequence to the recurrent layer. RNNs are structured to retain information about a sequence of inputs, which makes them well suited to environments where deciding how to act in state S benefits from knowing which states came before.
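The architecture described above can be sketched in PyTorch. This is an illustrative toy, not SLM Lab's actual RecurrentNet class: a per-state MLP feeds a GRU, and the final hidden state produces the output. All class and argument names here are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class SimpleRecurrentNet(nn.Module):
    """Illustrative sketch: a state-processing MLP followed by a GRU layer."""

    def __init__(self, state_dim, fc_hid_layers, rnn_hidden_size, out_dim):
        super().__init__()
        layers, in_dim = [], state_dim
        for hid in fc_hid_layers:  # build the MLP from the hidden-layer sizes
            layers += [nn.Linear(in_dim, hid), nn.ReLU()]
            in_dim = hid
        self.fc = nn.Sequential(*layers)  # state-processing MLP
        self.rnn = nn.GRU(in_dim, rnn_hidden_size, batch_first=True)
        self.out = nn.Linear(rnn_hidden_size, out_dim)

    def forward(self, states):
        # states: (batch, seq_len, state_dim); Linear applies over the last dim,
        # so each state in the sequence is transformed by the same MLP
        x = self.fc(states)
        _, h_n = self.rnn(x)        # h_n: (num_layers, batch, rnn_hidden_size)
        return self.out(h_n[-1])    # output computed from the last hidden state

net = SimpleRecurrentNet(state_dim=8, fc_hid_layers=[256, 128],
                         rnn_hidden_size=64, out_dim=4)
out = net(torch.zeros(2, 4, 8))  # batch of 2, sequence of 4 states
print(out.shape)  # torch.Size([2, 4])
```

The GRU consumes the sequence of MLP-transformed states and summarizes it in its final hidden state, which is why the output depends on the whole state history rather than the current state alone.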

Source Documentation

Refer to the class documentation and example net spec from the source: slm_lab/agent/net/recurrent.py#L10-L71

Example Net Spec

This specification instantiates a RecurrentNet with two components: first, a state-processing MLP with 2 hidden layers of 256 and 128 nodes respectively and rectified linear (ReLU) activations; then a single recurrent GRU layer with a hidden state of 64 units. The optimizer is Adam with a learning rate of 0.01. The number of sequential states used as input to the network is 4. The rest of the spec is annotated below.
{
  ...
  "agent": [{
    "net": {
      "type": "RecurrentNet",
      "shared": false, // whether to share networks for Actor-Critic
      "cell_type": "GRU",
      "fc_hid_layers": [256, 128],
      "hid_layers_activation": "relu",
      "out_layer_activation": null,
      "rnn_hidden_size": 64,
      "rnn_num_layers": 1,
      "bidirectional": false, // whether to use bidirectional layer
      "seq_len": 4,
      "init_fn": "xavier_uniform_", // weight initialization
      "clip_grad_val": 1.0, // clip gradient by norm
      "loss_spec": { // default loss function used for regression
        "name": "MSELoss"
      },
      "optim_spec": { // the optimizer and its arguments
        "name": "Adam",
        "lr": 0.01
      },
      ...
    }
  }],
  ...
}
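The "optim_spec" above follows the common pattern of naming a PyTorch optimizer class plus its keyword arguments. A hedged sketch of how such a spec can be resolved (an illustrative helper, not SLM Lab's actual loader):

```python
import torch
import torch.nn as nn

def build_optimizer(net_params, optim_spec):
    """Resolve an optimizer class from torch.optim by its "name" key.

    The remaining spec keys (e.g. "lr") are passed through as constructor kwargs.
    """
    spec = dict(optim_spec)  # copy so the original spec is untouched
    OptimClass = getattr(torch.optim, spec.pop('name'))
    return OptimClass(net_params, **spec)

net = nn.Linear(4, 2)  # stand-in network
optim = build_optimizer(net.parameters(), {'name': 'Adam', 'lr': 0.01})
print(type(optim).__name__)  # Adam
```

The same name-plus-kwargs pattern applies to "loss_spec": "MSELoss" resolves to torch.nn.MSELoss, the default regression loss noted in the spec.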
For more concrete examples of net spec specific to algorithms, refer to the existing spec files.
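With "seq_len" set to 4, each forward pass sees the current state plus the three preceding ones. A minimal sketch of how a state history can be windowed into such an input (an illustrative helper with assumed names, not part of SLM Lab):

```python
import numpy as np

def make_sequence(state_buffer, seq_len):
    """Stack the most recent seq_len states into a (seq_len, state_dim) array.

    Assumes a non-empty buffer; pads by repeating the earliest available
    state when fewer than seq_len states have been observed.
    """
    recent = list(state_buffer[-seq_len:])
    while len(recent) < seq_len:
        recent = [recent[0]] + recent
    return np.stack(recent)

# dummy 3-dimensional states for timesteps t = 0..5
states = [np.full(3, t, dtype=np.float32) for t in range(6)]
seq = make_sequence(states, seq_len=4)
print(seq.shape)   # (4, 3)
print(seq[:, 0])   # [2. 3. 4. 5.] -- the last four timesteps
```

A batch of such windows, shaped (batch, seq_len, state_dim), is what the recurrent layer consumes.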