Page 26, Section 2.2 The Objective Function, Equation 2.1
Equation 2.1 misplaces the prime symbol due to a LaTeX formatting error. It was
$$R_t(\tau) = \sum_{t'=t}^{T} \gamma^{t'-t} r_t'$$
(misplaced prime: $r_t'$ instead of $r_{t'}$). Instead it should have been
$$R_t(\tau) = \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'}$$
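For concreteness, here is a minimal Python sketch of the corrected return; the function name `discounted_future_return` and the flat reward-list representation are illustrative assumptions, not taken from the book:

```python
def discounted_future_return(rewards, t, gamma=0.99):
    """R_t(tau) = sum over t' from t to T of gamma^(t' - t) * r_{t'}.

    rewards is assumed to be the list [r_0, r_1, ..., r_T] of one trajectory.
    """
    return sum(gamma ** (t_prime - t) * rewards[t_prime]
               for t_prime in range(t, len(rewards)))
```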
Page 27, Section 2.3 The Policy Gradient, Equation 2.3
Equation 2.3 contains a typo. Following from Equation 2.2, the max operator should be applied to both sides of the equation. It was
$$\max_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$$
(missing max on the right). Instead it should have been
$$\max_\theta J(\pi_\theta) = \max_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$$
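As a short worked restatement (assuming Equation 2.2 defines the objective as $J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)]$): substituting 2.2 into the maximization gives
$$\max_\theta J(\pi_\theta) = \max_\theta \mathbb{E}_{\tau \sim \pi_\theta}[R(\tau)],$$
so the operator must appear on both sides once the expectation replaces $J(\pi_\theta)$.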
Page 28, Section 2.3.1 Policy Gradient Derivation, Equation 2.9
In the chain of derivation, Equation 2.9 labels the step used as (chain-rule), but in fact it is (product-rule).
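For reference, a sketch of how this step typically looks in score-function derivations (the exact form of Equation 2.9 is assumed here, not quoted from the book): differentiating the product $R(\tau)\, p(\tau \mid \theta)$ uses the product rule, and the term containing $\nabla_\theta R(\tau)$ vanishes because $R(\tau)$ does not depend on $\theta$:
$$\nabla_\theta \big( R(\tau)\, p(\tau \mid \theta) \big) = \big( \nabla_\theta R(\tau) \big)\, p(\tau \mid \theta) + R(\tau)\, \nabla_\theta p(\tau \mid \theta) = R(\tau)\, \nabla_\theta p(\tau \mid \theta)$$
The chain rule, by contrast, is what later yields the log-derivative identity $\nabla_\theta \log p(\tau \mid \theta) = \nabla_\theta p(\tau \mid \theta) / p(\tau \mid \theta)$.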
Page 30, Section 2.3.1 Policy Gradient Derivation, Equation 2.21
Equation 2.21 misses a step in the derivation:
By substituting Equation 2.20 into 2.15 and bringing in the multiplier $R(\tau)$, we obtain
$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} R_t(\tau)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right] \quad (2.21)$$
Actually, the substitution yields
$$\nabla_\theta J(\pi_\theta) = \mathbb{E}_{\tau \sim \pi_\theta}\left[\sum_{t=0}^{T} R(\tau)\, \nabla_\theta \log \pi_\theta(a_t \mid s_t)\right] \quad \text{(missing step 1)}$$
There is an additional step which modifies $R(\tau)$ to give us Equation 2.21. The form above has high variance due to the many possible actions over a trajectory. One way to reduce the variance is to account for causality by only considering the future rewards for any given time step $t$. This makes sense since an event occurring at time step $t$ can only affect the future, not the past. To do so, we modify $R(\tau)$ as follows:
$$R(\tau) = R_0(\tau) = \sum_{t'=0}^{T} \gamma^{t'} r_{t'} \;\rightarrow\; \sum_{t'=t}^{T} \gamma^{t'-t} r_{t'} = R_t(\tau) \quad \text{(missing step 2)}$$
With this, we obtain Equation 2.21.
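To illustrate the difference between the two weightings, here is a minimal PyTorch sketch of the resulting Monte Carlo estimator for a single trajectory; the function `reinforce_loss`, its arguments, and the use of PyTorch are illustrative assumptions, not the book's code:

```python
import torch

def reinforce_loss(log_probs, rewards, gamma=0.99, use_reward_to_go=True):
    """Monte Carlo policy-gradient loss for one trajectory.

    log_probs: tensor of log pi_theta(a_t | s_t), shape (T+1,)
    rewards:   tensor of rewards r_t, shape (T+1,)
    """
    n = rewards.shape[0]
    if use_reward_to_go:
        # Weight each term by R_t(tau) = sum_{t'=t}^{T} gamma^(t'-t) r_{t'} (Equation 2.21).
        weights = torch.zeros(n)
        running = 0.0
        for t in reversed(range(n)):
            running = float(rewards[t]) + gamma * running
            weights[t] = running
    else:
        # Weight every term by the full return R(tau) = R_0(tau) ("missing step 1" above).
        r0 = sum(gamma ** t * float(rewards[t]) for t in range(n))
        weights = torch.full((n,), r0)
    # Minimizing this loss performs gradient ascent on J(pi_theta).
    return -(log_probs * weights).sum()
```

Both weightings estimate the same gradient, but the reward-to-go form typically has lower variance, as noted above.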