DQN
Deep Q-Learning
Q-learning (Watkins, 1989; Mnih et al., 2013) algorithms estimate the optimal Q-function, i.e. the value of taking action A in state S under the optimal policy. Q-learning algorithms have an implicit policy (a strategy for acting in the environment). This is typically epsilon-greedy, in which the action with the maximum Q-value is selected with probability 1 - epsilon and a random action is taken with probability epsilon, or Boltzmann (see definition below). Random actions encourage exploration of the state space and help prevent algorithms from getting stuck in local minima.
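To make the two policies concrete, here is a minimal numpy sketch of both selection rules. The function names and the numpy-only setup are illustrative assumptions, not the library's actual implementation.

```python
import numpy as np

def epsilon_greedy(q_values, epsilon):
    # With probability epsilon act randomly, otherwise act greedily.
    if np.random.rand() < epsilon:
        return np.random.randint(len(q_values))
    return int(np.argmax(q_values))

def boltzmann(q_values, tau):
    # Softmax over Q-values, sharpened or flattened by the temperature tau.
    z = q_values / tau
    z = z - np.max(z)  # subtract max for numerical stability
    probs = np.exp(z) / np.sum(np.exp(z))
    return int(np.random.choice(len(q_values), p=probs))

q = np.array([1.0, 2.5, 0.3])
print(epsilon_greedy(q, epsilon=0.1), boltzmann(q, tau=1.0))
```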
Q-learning algorithms are off-policy algorithms because the target value used to train the network is independent of the policy used to generate the training data. This makes it possible to use experience replay to train an agent.
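As a rough illustration of why off-policy learning enables experience replay, the toy sketch below stores transitions as they are generated and samples training batches uniformly at random. It is a stand-in for the replay memory classes described later, not the library's implementation.

```python
import random
from collections import deque

class ReplayMemory:
    def __init__(self, max_size):
        # deque with maxlen drops the oldest transition once the memory is full
        self.buffer = deque(maxlen=max_size)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random sampling of past experience; valid because the
        # Q-learning target does not depend on the behavior policy.
        return random.sample(list(self.buffer), min(batch_size, len(self.buffer)))
```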
It is a bootstrapped algorithm: updates to the Q-function are based on existing estimates. It is also a temporal difference algorithm: the estimate at time t is updated using an estimate from time t+1. This allows Q-learning algorithms to be online and incremental, so the agent can be trained during an episode.
Algorithm: DQN with target network
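The core update can be summarized by the following PyTorch-style sketch of a single training step: the online network net is regressed toward a bootstrapped TD target computed from a separate target network. Names such as net, target_net, and batch are assumptions made for illustration; see the library's DQN implementation for the real thing.

```python
import torch
import torch.nn.functional as F

def dqn_train_step(net, target_net, optimizer, batch, gamma):
    # batch: tensors; actions is a long tensor, dones is a 0/1 float tensor
    states, actions, rewards, next_states, dones = batch
    # Q(s, a) for the actions actually taken
    q_pred = net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        # Bootstrapped TD target: r + gamma * max_a' Q_target(s', a')
        q_next = target_net(next_states).max(dim=1)[0]
        q_target = rewards + gamma * (1 - dones) * q_next
    loss = F.mse_loss(q_pred, q_target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```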
See dqn.json for example specs of variations of the DQN algorithm (e.g. DQN, DoubleDQN, DRQN). Parameters are explained below.
Basic Parameters
"agent": [{
"name": str,
"algorithm": {
"name": str,
"action_pdtype": str,
"action_policy": str,
"explore_var_start": float,
"explore_var_end": float,
"explore_anneal_epi": int,
"gamma": float,
"training_batch_epoch": int,
"training_epoch": int,
"training_frequency": int,
},
"memory": {
"name": str,
"batch_size": int,
"max_size": int
},
"net": {
"type": str,
"hid_layers": list,
"hid_layers_activation": str,
"optim_spec": dict,
}
}],
...
}

algorithm

name: general param
action_pdtype: general param
action_policy: string specifying which policy to use to act. Options are "boltzmann" or "epsilon_greedy".

The "boltzmann" policy selects actions by sampling from a probability distribution over the actions. This distribution is generated by taking a softmax over all the Q-values for a state (estimated by a neural network), adjusted by the temperature parameter, tau. The "epsilon_greedy" policy selects a random action with probability epsilon, and the action corresponding to the maximum Q-value with probability (1 - epsilon).

explore_var_start: initial value of the exploration parameter (tau or epsilon)
explore_var_end: final value of the exploration parameter (tau or epsilon)
explore_anneal_epi: number of episodes over which to reduce the exploration parameter from its start value to its end value. The reduction is currently linear.
gamma: general param
training_batch_epoch: how many gradient updates to make per batch
training_epoch: how many batches to sample from the replay memory each time the agent is trained
training_frequency: how often to train the algorithm. A value of 3 means the agent is trained every 3 steps it takes in the environment.
memory

name: general param. Compatible types: "Replay", "PrioritizedReplay"
batch_size: how many examples to include in each batch when sampling from the replay memory
max_size: maximum size of the memory. Once the memory has reached maximum capacity, the oldest examples are deleted to make space for new ones.
net

type: general param. Compatible types: all networks
hid_layers: general param
hid_layers_activation: general param
optim_spec: general param
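For concreteness, a hypothetical filled-in basic spec might look like the snippet below. All values here are illustrative assumptions; consult dqn.json for actual, tested specs and valid parameter combinations.

```json
"agent": [{
  "name": "DQN",
  "algorithm": {
    "name": "DQN",
    "action_policy": "epsilon_greedy",
    "explore_var_start": 1.0,
    "explore_var_end": 0.1,
    "explore_anneal_epi": 100,
    "gamma": 0.99,
    "training_batch_epoch": 4,
    "training_epoch": 4,
    "training_frequency": 1
  },
  "memory": {
    "name": "Replay",
    "batch_size": 32,
    "max_size": 10000
  },
  "net": {
    "type": "MLPNet",
    "hid_layers": [64],
    "hid_layers_activation": "relu",
    "optim_spec": {"name": "Adam", "lr": 0.001}
  }
}],
```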
Advanced Parameters
"agent": [{
"algorithm": {
"training_min_timestep": int,
"action_policy_update": str,
},
"memory": {
"use_cer": bool
},
"net": {
"rnn_hidden_size": int,
"rnn_num_layers": int,
"seq_len": int,
"update_type": str,
"update_frequency": int,
"polyak_weight": float,
"clip_grad": bool,
"clip_grad_val": float,
"loss_spec": dict
"lr_decay": str,
"lr_decay_frequency": int,
"lr_decay_min_timestep": int,
"lr_anneal_timestep": int,
"gpu": int
}
}],
...
}

algorithm

training_min_timestep: how many time steps to wait before starting to train. It can be useful to set this to 0.5 - 1x the batch size so that the DQN has a few examples to learn from in the first training iterations.
action_policy_update: how to update the explore_var parameter in the action policy each episode. Available options are "linear_decay", "rate_decay", and "periodic_decay". See policy_util.py for more details.
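As a sketch of what a "linear_decay" style update of explore_var could look like, using the basic exploration parameters described earlier (illustrative only; the actual implementations live in policy_util.py):

```python
def linear_decay(explore_var_start, explore_var_end, explore_anneal_epi, epi):
    # Interpolate linearly from start to end over explore_anneal_epi episodes,
    # then stay at the end value.
    frac = min(epi / explore_anneal_epi, 1.0)
    return explore_var_start + frac * (explore_var_end - explore_var_start)

# e.g. epsilon annealed from 1.0 to 0.1 over 100 episodes
print([round(linear_decay(1.0, 0.1, 100, e), 2) for e in (0, 50, 100, 200)])
# [1.0, 0.55, 0.1, 0.1]
```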
memory

use_cer: whether to use Combined Experience Replay
net

rnn_hidden_size: general param
rnn_num_layers: general param
seq_len: general param
update_type: method of updating target_net. Options are "replace" or "polyak". "replace" replaces target_net with net every update_frequency time steps. "polyak" updates target_net to polyak_weight * target_net + (1 - polyak_weight) * net each time step.
update_frequency: how often to update target_net with net when using the "replace" update_type
polyak_weight: how much weight to give the old target_net when updating target_net using the "polyak" update_type
clip_grad: general param
clip_grad_val: general param
loss_spec: general param
lr_decay: general param
lr_decay_frequency: general param
lr_decay_min_timestep: general param
lr_anneal_timestep: general param
gpu: general param
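To illustrate the two update_type options, here is a hedged PyTorch sketch of a target network update. The function and its default values are assumptions for illustration, not the library's code.

```python
import torch

def update_target_net(net, target_net, t, update_type,
                      update_frequency=100, polyak_weight=0.99):
    if update_type == "replace":
        # Hard copy of net into target_net every update_frequency time steps
        if t % update_frequency == 0:
            target_net.load_state_dict(net.state_dict())
    elif update_type == "polyak":
        # Soft update: polyak_weight * target_net + (1 - polyak_weight) * net
        with torch.no_grad():
            for tp, p in zip(target_net.parameters(), net.parameters()):
                tp.data.copy_(polyak_weight * tp.data + (1 - polyak_weight) * p.data)
```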