MLP

Multi-Layer Perceptron

These networks take a single state as input and are composed of a sequence of dense (fully connected) layers. MLPs are simple, general-purpose networks, well suited to environments with a low-dimensional state space or a state space with no spatial structure.
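To make the architecture concrete, here is a minimal PyTorch sketch of such a network, not SLM Lab's actual implementation: a stack of dense layers that maps a single low-dimensional state to an output vector. The state dimension, output dimension, and layer widths below are illustrative assumptions.

import torch
import torch.nn as nn

state_dim, action_dim = 4, 2       # assumed: a CartPole-like environment
hid_layers = [64, 32]              # assumed hidden layer widths

# build dense hidden layers with ReLU activations
layers, in_dim = [], state_dim
for out_dim in hid_layers:
    layers += [nn.Linear(in_dim, out_dim), nn.ReLU()]
    in_dim = out_dim
layers.append(nn.Linear(in_dim, action_dim))  # output layer, no activation
mlp = nn.Sequential(*layers)

state = torch.randn(1, state_dim)  # a single state as input
output = mlp(state)                # e.g. Q-values or action logits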

Source Documentation

Refer to the class documentation and example net spec from the source: slm_lab/agent/net/mlp.py#L12-L58

Example Net Spec

This specification instantiates an MLP with 3 hidden layers of 256, 128, and 64 nodes respectively, rectified linear (ReLU) activations, and the Adam optimizer with a learning rate of 0.02. The rest of the spec is annotated below.
{
  ...
  "agent": [{
    "net": {
      "type": "MLPNet",
      "shared": false, // whether to share networks for Actor-Critic
      "hid_layers": [256, 128, 64],
      "hid_layers_activation": "relu",
      "out_layer_activation": null, // output layer activation
      "init_fn": "xavier_uniform_", // weight initialization
      "clip_grad_val": 1.0, // clip gradient by norm
      "loss_spec": { // default loss function used for regression
        "name": "MSELoss"
      },
      "optim_spec": { // the optimizer and its arguments
        "name": "Adam",
        "lr": 0.02
      },
      ...
    }
  }],
  ...
}
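To show roughly how these fields translate into a network, the sketch below builds the layers, weight initialization, loss, optimizer, and gradient clipping from such a spec using plain PyTorch. This is an illustrative approximation, not SLM Lab's internal net builder, and the in_dim and out_dim values are assumptions.

import torch
import torch.nn as nn

net_spec = {
    "hid_layers": [256, 128, 64],
    "hid_layers_activation": "relu",
    "out_layer_activation": None,
    "init_fn": "xavier_uniform_",
    "clip_grad_val": 1.0,
    "loss_spec": {"name": "MSELoss"},
    "optim_spec": {"name": "Adam", "lr": 0.02},
}
in_dim, out_dim = 8, 4  # assumed state and output dimensions

# hidden layers with the specified activation; null output activation -> none
layers, prev = [], in_dim
for width in net_spec["hid_layers"]:
    layers += [nn.Linear(prev, width), nn.ReLU()]
    prev = width
layers.append(nn.Linear(prev, out_dim))
model = nn.Sequential(*layers)

# weight initialization per init_fn
for m in model:
    if isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)

# loss and optimizer per loss_spec / optim_spec
loss_fn = nn.MSELoss()
optim = torch.optim.Adam(model.parameters(), lr=net_spec["optim_spec"]["lr"])

# one training step with gradient norm clipping per clip_grad_val
x, target = torch.randn(16, in_dim), torch.randn(16, out_dim)
loss = loss_fn(model(x), target)
optim.zero_grad()
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), net_spec["clip_grad_val"])
optim.step()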
For more concrete examples of net specs for specific algorithms, refer to the existing spec files.